Kantian Division of Truth
From "Post-socialist Capitalism: The Rise of Enlightened Individualism" [Forthcoming]
Immanuel Kant (1724–1804) was a highly influential German philosopher whose philosophy is known as transcendental idealism and is associated with modern rationalism and empiricism (Rohlf 2020). His system is also an antithesis of Objectivism. As a teenager, I attempted to read his Critique of Pure Reason for that reason, as I’ve long believed that pursuing truth and intellectual honesty requires reading both (or all) sides of a debate. Unfortunately, a combination of limited time, a limited understanding of philosophy, and Kant’s opaque language prevented me from making much headway. I felt that either his diction or perhaps archaic and stylized linguistic conventions greatly obscured the meaning of his words (of course, I was also reading a translation). J. M. D. Meiklejohn, the translator of an 1855 edition made available by Harvard College’s library on Google Books, wrote the following:
[Kant] wearies by frequent repetitions, and employs a great number of words to express, in the clumsiest way, what could have been enounced more clearly and distinctly in a few. The main statement in his sentences is often overlaid with a multitude of qualifying and explanatory clauses; and the reader is lost in a maze, from which he has great difficulty in extricating himself.
One of the most influential components of his Critique, which I did not absorb at that time, was what’s now known as the analytic-synthetic dichotomy. After reading Leonard Peikoff’s essay “The Analytic-Synthetic Dichotomy” in its anthologized version, I returned to my Kantian research. This time, fortunately, I had many more resources available to aid me, in addition to the benefit of a better overall understanding of philosophy. I also had the opportunity to review some of the literature on the topic. I will present the concept in plain language, discuss subsequent references, briefly summarize Peikoff’s extensive response, and conclude with my insights.
The Dichotomy
It begins with Kant’s introduction of the paradigm. The analytic-synthetic divide was one of several interrelated dichotomies that he introduced, such as a priori versus a posteriori, intuition versus concept, and pure versus empirical knowledge. These dichotomies were presented in the framework of his transcendental doctrine (spanning aesthetics, metaphysics, and other philosophical categorizations, such as logic) in conjunction with murkier discussions of space and time, which, for the present discussion, can be set aside for clarity’s sake. The theme across most of these dichotomies is an alleged schism separating that which is directly knowable through one’s senses (“intuition”), or which can be said to be true without reference to external evidence (by “general” logic), from that which is learned through experience, by synthesis, or by extrapolation from the directly knowable (such as by “transcendental” logic). That brings us very close to a working definition of analytic and synthetic. Kant typically uses these terms within a more specific context, such as analytic principles, analytic conceptions, judgments, etc. In modern contexts, they’re often discussed as analytic or synthetic propositions. A plain language generalization of the terms is the following:
Analytic—that which can be determined to be true by the definitions of its components.
Synthetic—that which can be determined to be true only by reference to external knowledge.
I use the terminology “that which” due to the aforementioned diversity of contexts in which the terms are applied. However, a helpful simplification is to assume the context of language: analytic or synthetic sentences or propositions. Both the analytic-synthetic distinction and philosophy quite broadly are surprisingly bound up with language. Language is the means by which we express ideas and judge their validity. For the most part, applications of these terms beyond language flow from their application to language or, at least, are analogous to such application (although Carnap questioned the usefulness of carrying epistemological models from language over to science). In the linguistic context, we can restate the plain language definitions with greater specificity:
Analytic—sentences whose truth can be discerned from the definitions of the words they contain.
Synthetic—sentences whose truth can be evaluated only with reference to external knowledge.
As is customary, I’ll illustrate the dichotomy with some example sets of propositions:
Group A
All dachshunds are dogs.
All dogs who bark are dogs.
All dogs who bark make noise.
Group B
All dachshunds are cute dogs.
All dogs go to heaven.
All dogs who make noise are hungry.
Clearly, those in group A are (considered) analytic, while those in group B are synthetic. While a priori and a posteriori are widely used in science (such as distinguishing what one might conclude in the absence of experimental data versus after having collected such data), I have not been able to verify significant instances of the analytic-synthetic dichotomy itself being applied in other fields. A lack of application could be taken as a sign that the dichotomy lacks the usefulness that would justify its perpetuation in philosophy. Yet its influence within theoretical philosophy has been nontrivial, for better or worse. The prominent philosopher Rudolf Carnap (1891–1970), associated with logical positivism, relied upon the dichotomy in his principle of tolerance, which asserts that no language is “correct” and attempts to lay the groundwork for a philosophical accounting of the differences that language introduces into scientific inquiry. It holds that changes in language affect analytic but not synthetic statements.
Critics
The American naturalist philosopher Willard Van Orman Quine (1908–2000) notably opposed the dichotomy in his essay “Two Dogmas of Empiricism” (1951). His philosophical orientation was described as naturalism because it had its basis in science (primarily natural science, although he was careful to use a definition of the term science that included social and other softer sciences, such as psychology, sociology, etc.). He held that “it is within science itself, and not in some prior philosophy, that reality is to be identified and described” (Hylton and Kemp 2023; Quine 1951).1 He saw the dichotomy as a false distinction and advocated for a more “thorough pragmatism.”
Quine made a holistic argument, saying, among other things, (a) that our beliefs are parts of an interconnected whole, inseparable into isolated analytic and synthetic tokens, and (b) that the distinction is false because all knowledge requires, or at least can be called into question by, empirical evidence. Holism itself is not considered controversial and was accepted by Carnap. It suggests that the larger context of a sentence (within a block of text, a theory, a component of an argument, etc.) influences the extent to which it depends on external evidence and the nature of that evidence. Where Carnap’s principle of tolerance holds that changes to analytic statements are a matter of language, unanswerable to experience, Quine’s point was that no such difference exists, as even supposedly “analytic” statements may reasonably be brought into question as they pertain to a larger context (e.g., a scientific theory). Quine’s point can be obscured by the commonly simple examples of analytic and synthetic statements, such as those I gave above. In his 1962 article “The Analytic and the Synthetic,” Hilary Putnam, a thoughtful proponent of the dichotomy’s existence (with ambivalence as to its importance), emphasized Quine’s holistic point following the publication of what he viewed as baseless knee-jerk reactions against it:
[L]aws of Euclidean geometry were, before the development of non-Euclidean geometry, as analytic as any nonanalytic statements ever get; I mean to group them, in this respect, with many other principles: the law “f = ma” (force equals mass times acceleration), the principle that the world did not come into existence five minutes ago, the principle that one cannot know certain kinds of facts, e.g., facts about objects at a distance from one, unless one has or has had evidence. These principles play several different roles; but in one respect they are alike. They share the characteristic that no isolated experiment (I cannot think of a better phrase than “isolated experiment” to contrast with “rival theory”) can overthrow them. On the other hand, most of these principles can be overthrown if there is good reason for overthrowing them, and such good reason would have to consist in the presentation of a whole rival theory embodying the denials of these principles, plus evidence of the success of such a rival theory.
Objectivism also rejects the dichotomy. Ayn Rand and, to a much greater extent, Leonard Peikoff denounced it as not only useless but actually damaging to the development of epistemology. Rand wrote that the dichotomy was an assault on cognition, “widening the breach” between “mind and reality” (Rand and Peikoff 1990, chapter 8).2 She described it as a “grotesque” device, creating an artificial separation between “factually” true statements and “necessarily” true ones. Although scathing, Rand’s rebuke was brief.
Peikoff’s criticisms were far more extensive. His comments on the topic span over half a century, making him the most well-versed expert on the history of this crucial component of Kant’s seminal work. His essay “The Analytic-Synthetic Dichotomy” was first published in The Objectivist in 1967. It was later republished as an appendix to Introduction to Objectivist Epistemology, which is the version used in my research. The treatise is approximately eleven thousand words long and deconstructs both the dichotomy and its implications to the finest grain, detailing every aspect of its interpretation. His conclusions leave no room for ambiguity: “Objectivism rejects the theory of the analytic-synthetic dichotomy as false—in principle, at root, and in every one of its variants.” The most self-sufficient examples of so-called analytic statements, like “All dogs who bark are dogs,” rely on the law of identity (they’re tautologies). These convey the least possible information. Peikoff alludes to this by quoting Ludwig Wittgenstein: “The propositions of logic all say the same thing: that is, nothing.” Thus, we’re being handed a dichotomy between statements that say “nothing about that which exists” in the analytic case and those that “cannot be proved” in the synthetic. Put simply, the only informative statements are synthetic, so only one half of the dichotomy has any importance, which means it’s a false dichotomy, or one so broken that it’s useless (at best). This component of Peikoff’s reasoning is aligned with Quine’s and even Putnam’s.
Where Quine’s analysis was part of his career-wide “ode to science,” meticulously making the case that all of philosophy is and should be a sub-component of science, Peikoff’s objections were driven by concern for the deleterious impact of the dichotomy’s acceptance on human cognition, which he compared to a plague. This is made clear in his exploration of the dichotomy through the lens of conception. One formulation of the definition of an analytic proposition is “one which can be validated merely by an analysis of the meaning of its constituent concepts.” Peikoff points out that this raises the question of what defines a concept: is it the parts of existence (existents) it encompasses and, therefore, all of their characteristics? Or does a concept contain some of its existents’ characteristics, but not others?
Peikoff tells us that the latter definition is fundamental to the dichotomy and stems from the Platonic realist theory of universals and its related essence-accidents dichotomy, based on mysticism. Some characteristics, it says, are essential (supernatural), while others are accidental (worldly). Nominalists take the same approach, re-framed to substitute man’s subjective social assignment of essential characteristics for divine origin. Objectivism, on the other hand, asserts that concepts are open-ended classifications based on “observed similarities which distinguish them from all other known concretes.” Quoting Rand, concepts “represent classifications of observed existents according to their relationships to other observed existents.” According to Objectivist thought, the essential (defining) characteristics of existents comprising a concept are determined contextually. Classifications of the existents into concepts are “condensations of data” that must satisfy the logical requirement of non-contradictory identification (Peikoff 1993, 112–118).3
The Objectivist treatment of concepts bears a striking resemblance to how data scientists analyze text corpora to determine the most predictive terms in component documents, terms which in most contexts are highly correlated with, or identical to, those with the most importance. The methodology is called “term frequency / inverse document frequency” (TF-IDF). TF-IDF is calculated by multiplying a term’s frequency in a document by the inverse of its frequency across all documents. The idea is that terms that appear often in a given document but rarely in others tell you the most about what distinguishes that document. These frequent, distinguishing terms are like the similarities of existents that distinguish them from other concretes. It’s an incredibly important component of “text mining,” and it’s one of the foundations of natural language processing (NLP). NLP is the field responsible for language models, including ChatGPT, Gemma, and Llama. TF-IDF’s origins in information retrieval date back to the 1970s (Peikoff’s and Rand’s comments predate it).
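For readers who would like to see the calculation concretely, below is a minimal from-scratch sketch in Python. The toy corpus and tokenization are my own illustrative assumptions rather than any standard library’s implementation; production systems typically rely on established tooling such as scikit-learn’s TfidfVectorizer.

    import math
    from collections import Counter

    # A toy corpus echoing the essay's examples; each string is a "document."
    docs = [
        "dachshunds are dogs and dachshunds bark",
        "dogs bark and dogs make noise",
        "cats and cats make noise but rarely bark",
    ]

    tokenized = [doc.split() for doc in docs]
    n_docs = len(tokenized)

    # Document frequency: the number of documents in which each term appears.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))

    def tf_idf(term, tokens):
        # Term frequency: how often the term appears in this document.
        tf = tokens.count(term) / len(tokens)
        # Inverse document frequency: rarer across the corpus means more weight.
        idf = math.log(n_docs / df[term])
        return tf * idf

    # Terms frequent in one document but rare elsewhere score highest; they
    # are the terms that most distinguish that document from the others.
    for i, tokens in enumerate(tokenized):
        scores = {term: tf_idf(term, tokens) for term in set(tokens)}
        best = max(scores, key=scores.get)
        print(f"document {i}: most distinguishing term is {best!r} ({scores[best]:.3f})")

Note that a term appearing in every document, like “bark” above, receives an inverse document frequency of zero: ubiquity makes it useless for distinguishing anything, much as a characteristic shared by all known concretes cannot serve to differentiate a concept.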
Words, Peikoff says, are symbols that represent concepts; they have no meaning independent of the concepts they stand for. Concepts integrate mental units (existents) with the same characteristics, without reference to the measurements of those characteristics. As we learn more about the world, we add to the list of characteristics subsumed by our concepts. His example is that a child may begin with the knowledge that human beings are animals who are rational and later learn of the distinction between “living” and “inanimate,” at which point the child associates “living” with “human” (although I must humorously note that this is a strange kid, who grasps rationality before grasping that some things are alive). The concept’s meaning “consists of the units it integrates, including all the characteristics of these units.”
This understanding of concepts is powerful. It shows us that there’s no distinction between analytic and synthetic. Both rely on the law of identity. If the predicated characteristics are essential, then they’re included in the concept. Peikoff’s example was that both the claims that humans are “rational animals” and “have two eyes” are true by the definition of “human” if they are essential characteristics of humanity. We learn the essential characteristics contextually through experience.
Epistemology’s single basic dichotomy, Peikoff concludes, is truth versus falsehood. False and distracting alleged conflicts between logic and experience erode our understanding of metaphysics, epistemology, ethics, etc. His conclusions section expounds on the absurdities resulting from the analytic-synthetic dichotomy and the practical ways it has damaged the five branches of philosophy.
Observations
Poso Capitalism could be described as applied philosophy, although it cuts across other fields like economics and, within philosophy, is mostly focused on ethics and politics. From this applied perspective, it seems that three questions should be asked regarding the dichotomy:
Is the dichotomy useful?
Is it harmful?
Is it completely false?
Peikoff, Quine, Rand, and an application of common sense all lead to a clear conclusion that the dichotomy is (at least) useless. If philosophers find this view controversial, then I would be both surprised and inclined to request that they search for evidence supporting their perspective in the form of practical applications within quantitative research. The dichotomy has also diverted philosophers’ attention and confounded the ability to equate facts and logic with truth, as Peikoff illustrated in his opening example of debating a philosophy professor on the topic of monopolies.4 In this way, it’s actually a threat to human progress.
In fact, Peikoff made compelling arguments that the analytic-synthetic divide is entirely false. His points regard:
The mutual dependence of the analytic and synthetic on the law of identity.
That predicated characteristics are essential and are, therefore, part of the concept itself.
To use an extreme example—it seems like a basic statement of identification such as “A is A” could be conceptually distinguished from highly empirical statements like “polymorphic malware evades intrusion prevention systems with an average of 37% greater success relative to traditional malware, assuming equivalent payloads.” Such was Putnam’s reaction to Quine, but then again, perhaps I’m undervaluing Peikoff’s point. I can see that the statement regarding malware also depends on the law of identity—the factors making polymorphic viruses more effective are characteristics of the concept that subsumes them. It’s less obvious that “A is A” could ever be answerable to empirical evidence, but this leads back to the practical focus of Poso as well as a comment Putnam made regarding the confusion caused by overly simplistic examples of analytical statements.
Given that the dichotomy is useless and clearly harmful, it’s hard to care whether or not there are “toy” examples of analytic statements where it might hold. Such examples would be non-informative, and even if they exist, they don’t imply that this separate, non-empirical source of truth could ever conflict with that which is known to be true by experience and empiricism. Truth is universally consistent. If I’ve failed to fully appreciate the strength of the argument that absolutely no dichotomy exists under any circumstances, or if a sufficiently holistic lens would lead me to agree that the distinction is not only useless but false, then this still seems a secondary point: we already know that two truths can never truly conflict, that the dichotomy is useless, and that it has led to damaging, false epistemological beliefs.
Peikoff made these points saliently, and his ultimate conclusion that the only dichotomy that matters is the one between truth and falsehood is absolutely spot-on. While it may be the one true epistemological dichotomy, this critical divide has “extensions.” Those extensions are not dichotomies of statements or “types of truth,” as the analytic-synthetic paradigm was once purported to be. What stems instead from the realization of an objective reality, that is, the universality of truth, is a set of clearly defined values and an understanding that you not only can, but must, rely on reason as the instrument to identify truth. It is the faculty of reason applied to knowledge that discerns truth.
P. Hylton and G. Kemp, “Willard Van Orman Quine,” eds. E. N. Zalta and U. Nodelman, The Stanford Encyclopedia of Philosophy (Fall 2023), Metaphysics Research Lab, Stanford University, https://plato.stanford.edu/archives/fall2023/entries/quine/.
A. Rand and L. Peikoff, Introduction to Objectivist Epistemology: Expanded Second Edition, eds. H. Binswanger and L. Peikoff, expanded 2nd ed. (New York, New York: New American Library, 1990). Students may request a free digital copy of the book from the Ayn Rand Institute at https://AynRand.org.
L. Peikoff, Objectivism: The Philosophy of Ayn Rand (Penguin, 1993).
The smoking gun was the professor’s response, which indeed indicated a notion that logical truth and factual truth might exist as separate, conflicting entities. This does much to answer the layperson’s inevitable question, “Who cares?” The idea that those we trust the most to understand the truth and the mechanics of its pursuit could become so disoriented is chilling.
Having said that, my inner economist found both parties’ understandings of monopolies to be skewed. In fairness, almost no context was given and I have not read a full transcript of the debate in question. Government interventions may certainly lead to monopoly, to Peikoff’s point. However, monopolies can also occur naturally (traditional examples are power and water supply, although in recent times, technological and business developments have started reintroducing competition to these markets). Socialism and communism are also far, far more correlated with coercive monopolies than Capitalism, the latter system being built on an assumption of many buyers and sellers (an assumption that should be protected outside of edge cases like natural monopolies).