Remarks On Epistemology: Preamble
Philosophers say all kinds of high-minded things about knowledge, and create many abstract
epistemological systems which do not seem to impact their lives. Bemoaning this armchair theorizing
is a favorite pastime of people who make their living doing it. Indeed, a cursory glance at philosophical
literature will show no end of complaints about this. The response given by contemporary academics is
to align theory with concrete praxis, which is academic moon-speak for talk about knowledge in a
way that is based on how we actually deal with facts and opinions and what-not. This shows a
commendable drive toward epistemological discussion that isn't just a dead, isolated set of academic
theories, but something else, something with transformative power.
Unfortunately, the drive to align theory with concrete praxis translates, at present, into a drive to
grind an ideological ax. One hears a bit of lip-service to the power of reason, or perhaps a dubious re-
definition of objectivity to something that has nothing to do with objectivity, and even then only
from the more conservative professors.
I have only one remark to make here, and that is this: people with a background in academic
philosophy will probably find some of these remarks strange, backward, laughable, trivial, or ignorant.
This is due to the fact that I do not hold certain ideological prejudices common in the academy.
Academics take some things for granted that I find absurd. Keep that in mind, and approach this essay
on its own terms.
1 Reason, 'capital-R' Reason, is the ability to distinguish truth from falsehood. Reason is the
cardinal epistemic virtue. I shamelessly and naively assert that there is such a thing as truth
and that, ceteris paribus, it is better to know the truth than not. There is no non-circular
way for me to prove this, nor should there be; it ought to be obvious.
2 Epistemology is not about figuring out what knowledge is. We already all know what
knowledge is, as evidenced by the fact that we all know things, and we all know that we know
things, and we can all use the word know without recourse to a philosophical dictionary. And
this holds despite centuries of failed attempts by philosophers to precisely define what
knowledge is with an all-encompassing definition that captures every possible case. We know
such a definition is impossible (or at least, unfeasible) for the same reason we know that we
can't build perpetual motion machines; it's been tried thousands of times and never worked. And
we never needed perpetual motion machines, either.
(In the Theaetetus, three definitions of knowledge are rejected as wind-eggs, including the
definition of knowledge as justified true belief. In the 21st century, philosophers propound the
Gettier problem against that definition of knowledge, still arguing about it. In the silence after
the last argument is made, a soft echo is heard: Plato's exasperated laugh, a guffaw mixed with
a sigh.)
Instead, epistemology ought to be an approach to life that pays off by making us better at telling
truth from falsity, and I mean this about individual people. These remarks will make no attempt
to define knowledge, but rather to characterize it. One can say things about knowledge, how
knowledge works, and yes, what knowledge is, without providing an exhaustive definition.
3 All of our knowledge about the world outside of us is ultimately based in our own experiences;
this is a truism, and one that philosophers have stretched to absurd lengths. And yet, as soon as
we leave the armchair, our epistemology goes out the window and we're back to accepted
popular wisdom. One sees this even among philosophers who never shut up about the
importance of living a life transformed by philosophy. One is tempted to ask whether this
transformation is accomplished by means of tenure and paychecks.
And yet, there is a neglected pragmatic question here: to what extent do I trust my own
judgment, given that it forms the ultimate basis of my knowledge? I apprehend among educated
people an unspoken attitude that individual humans are so prone to cognitive distortion that one
ought to trust one's own judgment only to the bare minimum needed for everyday life, unless
one is an expert in a particular field. Otherwise, one ought to defer to the relevant authorities, of
which modernity provides no shortage. I would like to challenge this idea. It presents itself as a
responsible self-awareness of one's limitations, but it is really an abdication of responsibility, a
kind of epistemic false modesty. Believe what the physicist says about physics, but believe it
because the physicist's theories play a role in creating technology, and that technology works,
and you can see this with your own eyes. But beware when society creates experts who claim to
know things without producing any evidence that those things are true. The job of these experts is to
maintain an illusion, not because they are ordered to by some shadowy figure, but because their
fields are products of collective foolishness.
4 The first question here is a basic logical one. What is the reason for the implicit distrust of one's
own judgment? How do you know that humans are so prone to cognitive distortion that you
ought not to trust your own judgment on matters larger than your own life? If you defer to the
experts here, you're caught in a circular argument just as soon as you are asked to produce
justification for deferring to the experts. And if you don't, you're caught in self-refutation.
This is not an airtight argument, of course. There are a variety of ways that one can weasel out
of it. However, they fail, not in a strict logical sense but in the sense that they fall short of what
they are meant to do. One could, for example, appeal to a spectrum of trust, trusting one's own
judgment only up to a point. Or, one could claim that there are certain subjects where one's own
judgment is trustworthy, and certain subjects where one's own judgment is not. But notice that
any such response is still beholden to the dilemma set out in the preceding paragraph. If you
appeal to a spectrum, what is your reason for drawing the line where you do? Do you appeal to
the experts and fall afoul of circularity, or to your own judgment? Note that this question
applies even if the line is blurry. If, on the other hand, you claim that some subjects are
suitable for one's own judgment, and some are not, then one may ask how you derive the
criteria for classifying such subjects, and you are faced with the same problem as before; to
whom do you appeal?
Now, it may be objected in both of these cases that, perhaps, a sufficiently sophisticated system
could be consistent and non-circular on this matter. Perhaps the spectrum theorist could find a
reason for drawing the line that strikes a comfortable balance between one's own judgment and
that of the experts, and perhaps the person who favors classifications could do the same. But
anything that avoids circular appeal to the experts in order to justify the experts is then beholden
to individual judgment. And if it's beholden to individual judgment, then it is possible to
construct an alternate account that contradicts it by simply changing one's judgments.
5 One reply here, of a kind that is presently in vogue, is an appeal to the fact that humans are
social animals. We're embedded in a social context, and many of our putatively individual
judgments are the result of our upbringing. This social retort is just a special case of the same
kinds of arguments made in section 3, and falls prey to the same problems. But social
arguments of this kind are privileged with unwarranted rhetorical force because they are
fashionable, so I will treat the matter specifically.
Social context may provide an interesting study in the chain of events that led to my holding the
beliefs that I do, but what grounds does it provide for doubting my judgment?
6 The fact that a baseball pitcher's curveball really does curve as it travels through the air was not
scientifically and rigorously documented until 1959. Before that, there was a great deal of
debate over whether curveballs really followed a curved path or whether the curve was an
optical illusion (there is, incidentally, a minor optical illusion involved that exaggerates the
curve, but there is also really a curve). A famous baseball player of the era commonly known as
Dizzy Dean was quoted thus: "Stand behind a tree 60 feet away and I'll whomp you with an
optical illusion."
Nevertheless, Life magazine published an article in 1941 claiming that the curveball was an
illusion. The magazine even used what was then a new technology to prove this: high-speed
photography.
Someone living before the experiments that verified the existence of the curveball would have
faced a dilemma. Which experts would he trust? The baseball players, or the photographer? One
could argue that the baseball players had more experience with the phenomenon in question, or
conversely that the players were deceived by an optical illusion and being stubborn about it, and
that the whiz-bang new technology of high-speed photography offered proof. We choose our
experts.
6.1 A metaphor that one sees in epistemology is that of the crossword puzzle. Our body of
knowledge is like the crossword puzzle, with each of the things we know implying other
things in a complex web of relationships. The words in the puzzle interlock in a cohesive
way. Or at least, they do in the dreams of philosophers.
Consider a crossword puzzle with a number of places where the puzzle doesn't add up to a
coherent word, or adds up to the wrong word, or is incomplete; basically, places where the
puzzle isn't right. We'll call these *tension points*. Consider a tension point where a word is
spelled out that does not match the contextual hint given for that word in the puzzle
instructions. The person doing the crossword puzzle has a few different options.
First, the puzzle-taker could conclude that this word is wrong, and that some of the other
words must be changed. The tension point resolves to the context. That is to say, some of
the words around the tension point are slightly adjusted or re-written to eliminate the tension
point. This is like the following case: I think that I did not smoke my last cigarette, and that
nobody else did, leading me to conclude that I have one left. I open the pack, and find it
empty. I must either change my belief that I did not smoke my last cigarette (perhaps I forgot), or
my belief that nobody else did (perhaps someone took it). The tension point resolves to the context;
my belief that I have a cigarette left must change, and thus, *so do any other beliefs that
imply that*, such as my belief that nobody took my last cigarette. That italicized portion is
important; one typically gets to a false belief by means of a defect in one's other beliefs,
much as one typically gets a word wrong in a crossword puzzle by means of writing a
wrong word somewhere else.
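To lay the example out bare, here is a minimal propositional sketch of the cigarette case; the letters p, q, and r are introduced only for this illustration and carry no weight beyond it.

\[
% p, q, r are labels introduced only for this illustration
\begin{aligned}
p &: \text{I did not smoke my last cigarette} \\
q &: \text{nobody else took my last cigarette} \\
r &: \text{there is a cigarette left in the pack} \\[4pt]
\text{believed:} &\;\; p, \quad q, \quad (p \land q) \rightarrow r \\
\text{observed:} &\;\; \neg r \\
\text{therefore:} &\;\; \text{retract } p \text{ or retract } q
\end{aligned}
\]

The observation \(\neg r\) does not say which premise to give up; it only says that the set \(\{\, p,\; q,\; (p \land q) \rightarrow r,\; \neg r \,\}\) cannot all be kept.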
6.2 It is instructive to see where the crossword metaphor breaks down. In a crossword
puzzle, the crossword hints are not part of the puzzle, and their interpretation only admits of
a certain amount of ambiguity. But imagine that the hints were somehow part of the puzzle,
changing with the puzzle, and likewise for the interpretation of the hints.
6.3 Suppose I have an acquaintance, call him Bob, who claims to be a surgeon with a
sizable income. I know that Bob drives an expensive car and has a large house, but I am not
close to him. However, after knowing Bob for some months, I hear from a mutual
acquaintance that Bob is not really a doctor, but a low-level clerical worker with a very
modest salary, and has been arrested for the distribution of cocaine. This mutual
acquaintance is a person whom I trust. In this case, many of my other beliefs about Bob
change, and my interpretation of various facts about him also changes. He drove an
expensive car, not because he was a doctor who made enough money to afford it, but
because he dealt cocaine on the side. This is a case of a large body of my knowledge about
something changing to reflect a single tension point. The context aligns to the tension point,
which is to say that the entire body of my knowledge changes to suit my newfound
knowledge of who Bob really is. It is as if I were completing a crossword puzzle and had to
re-write a mis-spelled word. I know that the revised word is correct, but the revised word
does not fit any of the words around it, so the entire puzzle must be re-written to suit the
new correction.
7.2 If you'd just think for yourself, you'd agree with me!
7.3 If I can't see it, it doesn't work. I know this, because I've never seen anything work
without my seeing it. That little light in the fridge must always be on, because I've never
seen it off.
7.5 I didn't steal this money in my pocket. How would I even know if I did, maaan?
7.6 I don't know what the meaning of all this is, so there isn't one.
7.7 Transcendence is a security blanket for weak people. Thus do I sleep peacefully in the
knowledge that I'm as good as you.
8 Epistemic possibility is just acknowledged ignorance of a state of affairs. "There could be
money in this wallet," one says without looking in the wallet. This is the same as saying, "I do
not know if there is money in this wallet." Admission of this kind of possibility (as opposed to
modal possibility) is acknowledgment of ignorance.
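In the standard notation of epistemic logic this reading of possibility fits in one line; a rough sketch only, with K read as "I know that":

\[
% K is the knowledge operator; the equivalence is a paraphrase of the remark above
\Diamond p \;\equiv\; \neg K(\neg p)
\]

To say "there could be money in this wallet" is just to say "I do not know that there is no money in this wallet."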
9 What is the difference between failure to believe that p and disbelief in p? I propose that
the latter, what I'll call *active disbelief*, can be phrased in terms of the former, which I'll call
*non-belief*. Simply put, non-belief is the absence of a belief in something. If I fail to believe that
X is the case, then I do not believe X. I actively disbelieve X if and only if two things are true
(a formal sketch follows these two conditions):
9.1 I know that I do not believe X to be the case.
9.2 I know that I do not believe that X is possibly (epistemic sense) the case.
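Here is the promised sketch, with B read as "I believe that", K as "I know that", and the epistemic possibility of section 8 written as \(\Diamond X \equiv \neg K(\neg X)\); it is a compact restatement of 9.1 and 9.2, not a full theory of belief:

\[
% B = belief operator, K = knowledge operator; notation introduced only for this restatement
\begin{aligned}
\text{non-belief in } X &: \;\; \neg B(X) \\
\text{active disbelief in } X &: \;\; K(\neg B(X)) \;\land\; K(\neg B(\Diamond X))
\end{aligned}
\]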
10 When examining an argument, we may apprehend a confusion between the object of knowledge
and its means of acquisition.
11 Cognitive deficits can keep us from acquiring knowledge; this is uncontroversial.
The list of such deficits is so long that I can't write it here, but I'll give you a small set of
examples to give you an idea of what I mean: cognitive biases (confirmation bias, negativity
bias), malignant defense mechanisms (denial, projection, rationalization, bowing to the crowd),
logical fallacies (circular argumentation, denying the antecedent, affirming the consequent,
overgeneralization), and character defects (deference to popular opinion, laziness, stupidity).
The reason that so many people are confused is threefold, with the final reason proceeding
from the first two.
11.1 Time spent debating people will show you that cognitive deficits prevent us from
forming correct judgments in individual cases, which prevents us from forming a
coherent larger picture, resulting in an erroneous worldview, which maintains the
cognitive deficits that resulted in the errors in the first place. This is quite dense, so
some explanation is in order.
11.1.1 Human knowledge has a structure like a crossword puzzle, with each item of
knowledge supporting and supported by other items; I alluded to this in sections 6.1
through 6.3. Let's push that analogy a little further. If every word is correct except for
one, the error will be obvious and thus quickly corrected. In fact, this situation is
impossible in many crossword puzzles. However, imagine a crossword puzzle with
many errors, each of which supports, and is supported by, several other errors. Of
course, some words will be obviously wrong, so we try to correct them, but in every
case, there are some errors we are not willing to give up, because they are supported by
other errors that make them look correct.
11.1.2 Take a single obviously misspelled word, say "Obnocious" instead of
"Obnoxious." The correct solution can't be reached from this stage because the errors in
the rest of the puzzle make the correct solution look wrong! So we pick a different
solution that looks plausible (based on errors, mind you). Suppose we correct
"Obnocious" to "Obnotious," keeping in mind that "Obnoxious" is out of the question
because the huge number of errors in the puzzle makes it impossible for us to use an "x"
there. We then make corrections to the other words around it, until we're forced to
egregiously mis-spell another word. Then we realize that we messed up, so we repeat
the process over and over, and never get anywhere.
11.1.3 The question arises of why we would hold on to errors so much. The cognitive
deficits I talked about earlier are critical here, because they are the first part of what
keeps us in error, and social pressures take advantage of such deficits; that's important
to remember. The second part of what keeps us in error is that our overall picture is bad.
11.1.4 This is where the analogy breaks down, so, to save the reader from leaping into
even colder and deeper waters of abstraction, I will alter the analogy. Suppose that, in
some weird way, the hints to the crossword puzzle are, partially at least, determined by
the content of the puzzle. If you change the words around, the hints also change, perhaps
because they refer to the answers to other hints or something of that nature. Moreover,
the entire topic, or theme, of the crossword puzzle is also partially determined with
reference to the hints, say, by filling in blanks. This creates a feedback loop that traps us
in a cycle of errors.
11.1.5 We could try, as Descartes did, to eliminate everything uncertain and rebuild
from the bottom up. But when we do this, we find that we are left with so little that we
can't even begin to rebuild, because none of our crossword puzzles starts out empty. A
different solution is necessary.
11.2 Recall the point about social pressures making it hard to let go of the errors. Here we see
the real despair in our situation, but also the great hope: a society of millions must include
all of its members to function, and they must cooperate, if only to a slight degree. This
means that people who are qualified to make particular judgments must, in some
cases, either change their judgments or lie about them in order to exist in that society.
This, too, requires some explanation.
11.2.1 It is a bit like the story of the Emperor's New Clothes. The young boy can see
that the Emperor is naked, as can everyone around him. But everyone believes that
everyone else can see the clothes, so they don't say anything. Only the child does. Now,
expand this idea to a society that includes all kinds of lies. Some, like the Emperor's
nudity, are blatantly obvious, like the obviously misspelled word in the crossword
puzzle. Some are implausible, but a foolish person may still believe them; we can
imagine a widespread official doctrine that the Emperor has a pet dragon in his
basement, disbelieved by most people in private but publicly professed by everyone. A
few gullible people may believe it, but most people just pretend to believe it while
secretly scoffing at it. Some are so plausible that only a very shrewd person could see
through them; we can imagine a widespread lie that the Emperor has blue eyes, although
they are really green, and most people have not seen the Emperor close-up, so they just
assume that it is true.
11.2.2 This analogy breaks down here, so the reader must make a leap of faith into a
higher level of abstraction. Suppose that these widespread lies (there could be hundreds)
are all mutually supportive, so that belief in the more plausible lies implies
belief in the obviously false ones. Now it's almost impossible for anyone, even very
wise people, to come to correct conclusions about things, because they are deceived in
so many subtle ways. A very shrewd man may refuse to believe in the Emperor's
clothes, but find himself struggling to find a way not to believe in the Emperor's pet
dragon, because so many other plausible lies imply the lie about the pet dragon. Nearly
everyone is partially deceived, and anyone who believes none of the lies looks insane!
11.2.3 It gets worse when you look at what happens when people get together to try and
figure out the truth. You would expect to see that two heads are better than one, and that
the group of people would eventually muddle towards a solution. Alas, this is not the
case. The problem is twofold.
11.2.3.1 First, many of the widespread lies are disbelieved privately by many
people, but publicly endorsed by nearly everyone. So even if 9 out of 10 people in a
room don't believe some ridiculous statement about a pet dragon, they will all claim
to believe it in order to avoid social censure, because they think that everyone else
believes it. On top of that, if one person speaks up in disbelief, he may be shouted
down, perversely enough, by people who agree with him!
11.2.3.2 Second is a demonstration that ought to be visceral to the reader, because
it illustrates the truth of these words in action: there are more gullible people than
there are shrewd people. But, even though most shrewd people believe this, most of
them will, in polite company, pretend not to believe it. Sound familiar?
As a result, we see that, even when a large group of people gets together to reason things out, they'll
keep making the same errors.
11.3 There is another point here that is even more disturbing. It is very subtle, so I will give a
complicated example, and then explain it.
Imagine that there is a lie that is plausible enough to fool seven out of ten people. Imagine
that this lie states that the inside of the Emperor's throne room, which almost nobody has
seen, is made of solid equum-stercore, a precious metal that glows in the dark. Most
people believe it, but three out of ten know enough about metals or science or chemistry or
whatever to deduce that it isn't true. Furthermore, suppose that this lie implies certain other
lies that are more plausible; say, that the Empire is fabulously wealthy and can afford the
expensive equum-stercore. We can further imagine that this is also a lie, because the
Empire, in reality, is in debt.
Now, imagine that you have one hundred people, and the seventy that believe in the equum-
stercore leave the area. The thirty people left over, none of whom believe in the equum-
stercore, will still pretend to do so. But even worse is this: suppose that half of these shrewd
people believe that the Empire is wealthy, and half know it is in debt. When one of them
argues that the Empire is in debt, he may be rebutted by another shrewd person who
says, "But if we're in debt, how can the Emperor afford the equum-stercore for his throne
room?" The shrewd person arguing will not be able to advance his argument that the Empire
is in debt, because doing so would require contradicting the equum-stercore lie, and he is
unwilling to do that, for fear of social censure.
The point here is very subtle. Notice that half of the shrewd people in the above example are deceived
into believing that the Empire is wealthy, although it is really in debt. More worryingly, though, notice
how a lie that none of them believe prevents the half of the shrewd people who know that the Empire is
in debt from enlightening the other half and, chillingly, may cause them to doubt what they know
with good evidence, and actually lead them to consider believing that the Empire is wealthy, perhaps
even tempting them to believe in the equum-stercore. The ignorance of stupid people infects those
who share a society with them! This is how societies are dragged down to the lowest common
denominator, and one of the many things that eventually kills them.
12 There is a way to correct the bad crossword puzzle, but it can't be done by razing it to the
ground the way Descartes did. The way to correct it is to change the obviously misspelled
words to their correct spellings, and then ruthlessly eliminate all words that contradict them,
with deference to the constantly-corrected hints and to the few things we know for certain. Once
a certain tipping-point is reached, we are set on an inevitable course for the truth. This
requires great fortitude and, above all, great courage. And, as Socrates taught us, one can make
a rhetorical bargain to preserve a correctly-spelled word by making it seem compatible with
certain errors, knowing all the while that those errors are eventually doomed. But this requires
that your interlocutor be a lover of truth.
13 Let's make it a little more vivid: I have never even seen a raven, and I conjecture that they must
all be *white*. I go around and find a bunch of non-white objects that are not ravens. Hey, now
I've got empirical proof! A good way to approach the paradox may be to ask how someone
would refute me.
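The example is Hempel's raven paradox with the colors changed. For readers who want its logical skeleton, here is a minimal sketch in first-order notation; R(x) and W(x), read as "x is a raven" and "x is white", are symbols introduced only for this illustration:

\[
% R(x): x is a raven; W(x): x is white; symbols introduced only for this sketch
\begin{aligned}
\text{Conjecture:} &\;\; \forall x \, (R(x) \rightarrow W(x)) \\
\text{Contrapositive:} &\;\; \forall x \, (\neg W(x) \rightarrow \neg R(x))
\end{aligned}
\]

The two forms are logically equivalent, so every non-white non-raven, a green apple or a red shoe, is an instance of the contrapositive and thereby appears to support the conjecture, even though I have never looked at a single raven. Refutation, by contrast, requires exactly one counterexample: an object that is a raven and is not white.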