Michael Bergmann


What distinguishes Reidian externalism from other versions of epistemic externalism about justification is its proper functionalism and its commonsensism, both of which are inspired by the 18th century Scottish philosopher Thomas Reid.  Its proper functionalism is a particular analysis of justification; its commonsensism is a certain thesis about what we are noninferentially justified in believing.  The purpose of this paper is to introduce and motivate these two features of Reidian externalism.  I will begin by highlighting, in section I, the faults of alternative analyses of justification, noting how the proper functionalist element of Reidian externalism gives it an advantage over these competitors.  Then, in section II, I will explain how its commonsensism figures into an attractive response to skepticism. 


I. Analyses of Justification


Analyses of justification can be either internalist, externalist, or neither internalist nor externalist.[1]  When it comes to analyses that are neither internalist nor externalist, the only live option on the contemporary scene is a noninternalist version of evidentialism known as mentalism.  Thus, externalism can be effectively defended by objecting to both internalism and mentalism.  As for why we should prefer proper functionalism to other forms of externalism, there are two main considerations: (i) its superior way of handling an important objection to externalism (i.e., the New Evil Demon Problem) and (ii) the way it handles and is suggested by cases that are used as counterexamples to other analyses of justification.   I will argue against internalism and mentalism in sections I.A and I.C respectively.  And I’ll argue against the other main forms of externalism (i.e., those other than proper functionalism) in section I.B.  Then, in section I.D, I’ll defend the proper functionalist component of Reidian externalism.


A. Rejecting Internalism


An important contributing motivation for Reidian externalism is the hopelessness of internalism.  Of course, this motivation isn’t sufficient by itself because, as I’ve already indicated, there are other noninternalist positions besides Reidian externalism.  But once we see that the internalist answer is unsatisfactory, we can safely turn our attention elsewhere.

     We can start by considering what internalism is and why philosophers are attracted to it. What all forms of internalism have in common is that they require, for a belief’s justification, that the person holding the belief be aware (or at least potentially aware) of something contributing to its justification.  Thus, an analysis of justification counts as a version of internalism only if it endorses the following awareness requirement:

The Awareness Requirement: S’s belief B is justified only if (i) there is something, X, that contributes to the justification of B—e.g., evidence for B or a truth-indicator for B or the satisfaction of some necessary condition of B’s justification—and (ii) S is aware (or potentially aware) of X.


Why do internalists find it so tempting to impose the awareness requirement on justification?  We can see why by considering a prominent internalist complaint against externalist accounts of justification which don’t impose an awareness requirement:

Subject’s Perspective Objection (SPO): If the subject holding a belief isn’t aware of what that belief has going for it, then she isn’t aware of how its status is any different from a stray hunch or an arbitrary conviction.  From that we may conclude that from her perspective it is an accident that her belief is true.  And that implies that it isn’t a justified belief.[2]


Externalists say that what matters for justification is that the subject’s belief in fact has something going for it (such as being reliably formed), whether or not the subject is aware that it does.   The SPO says that all such views are mistaken since they don’t prevent it from being an accident from the subject’s perspective that her belief is true.  The SPO isn’t merely an objection to externalism.  It also constitutes the main motivation for internalism. For it makes clear why internalists think we should adopt the awareness requirement on justification described above. It seems that it is only by adopting that awareness requirement that one can avoid falling prey to the SPO.

    My objection to internalism is in the form of a dilemma.  Given that internalists endorse the awareness requirement, all internalists require for a belief’s justification that the person holding the belief have some sort of actual or potential awareness of something contributing to that belief’s justification.  All such awareness will either involve conceiving of the justification-contributor that is the object of awareness as being in some way relevant to the justification or truth of the belief or it won’t.  Let’s say that if it does involve such conceiving, it is strong awareness and that if it doesn’t, it is weak awareness.[3] Very briefly, the dilemma facing internalists is this.  If they require actual or potential strong awareness, then they are faced with vicious regress problems implying the radical skeptical conclusion that we are literally incapable of forming any justified beliefs.  But if they require only actual or potential weak awareness, then the SPO can be used against their view, in which case the main motivation for imposing an awareness requirement in the first place is lost.  Either way, the results are disastrous for internalism.

     We can lay out this dilemma more carefully as follows:

(I)      An essential feature of internalism is that it makes a subject’s actual or potential awareness of some justification-contributor a necessary condition for the justification of any belief held by that subject.

(II)    The awareness required by internalism is either strong awareness or weak awareness.

(III)   If the awareness required by internalism is strong awareness, then internalism has vicious regress problems leading to radical skepticism.

(IV)   If the awareness required by internalism is weak awareness, then internalism is vulnerable to the SPO, in which case internalism loses its main motivation for imposing the awareness requirement.

(V)    If internalism either leads to radical skepticism or loses its main motivation for imposing the awareness requirement (i.e., avoiding the SPO), then we should not endorse internalism.

(VI)   Therefore, we should not endorse internalism. 


Given that (II) is true by definition, the only questionable premises are (I), (III), (IV), and (V).  Of these, premises (I) and (V) are most plausible on their own and, therefore, least in need of defense.[4]  So I will focus here on premises (III) and (IV).

     In defending premise (III) I will make use of the distinction between doxastic and nondoxastic versions of strong awareness as well as the distinction between actual and potential strong awareness.  Doxastic strong awareness is strong awareness that involves the belief that the object of awareness is in some way relevant to the truth or justification of the relevant belief.  Nondoxastic strong awareness is just strong awareness that isn’t doxastic.  And an actual strong awareness requirement demands that the subject actually be aware whereas a potential strong awareness requirement demands only that the subject be able on reflection alone to be aware.  For reasons I explain below, it’s fairly easy to see that requiring actual doxastic strong awareness results in regress problems and skepticism.  And it turns out that moving to merely potential doxastic strong awareness or to nondoxastic strong awareness doesn’t help matters.

     To require actual doxastic strong awareness for the justification of a belief that p (which we can call ‘Bp’) is to require that the subject believe, of something contributing to Bp’s justification, that it is in some way relevant to Bp’s justification.  Notice first that this additional required belief is more complex than Bp since its content includes p and more besides.  Notice second that it is implausible to require this additional belief without requiring that it too be justified (an insane or irrational or unjustified belief of this sort presumably wouldn’t help at all).  But from this it follows that to require actual doxastic strong awareness for a belief’s justification is to require a further more complicated justified belief.  And the justification of that further more complicated belief will require yet another even more complicated justified belief.  And so on.  The upshot is that for even the simplest belief to be justified, the subject must hold an infinite number of ever more complicated beliefs.  And given the difficulty we have in grasping the content of the fifth or sixth belief in such a regress, this makes the justification of even the simplest beliefs impossible for us.

Will it help to require only potential strong awareness rather than actual strong awareness?  No.  For although this will prevent us from requiring the subject actually to hold an infinite number of beliefs, it won’t prevent us from requiring, for the justification of even simple beliefs, that the subject be able on reflection alone to hold beliefs with contents that are far more complicated than we are able to grasp.  Will it help to require only nondoxastic strong awareness rather than doxastic strong awareness?  No.  Any kind of strong awareness of a justification-contributor for a belief B involves conceiving of that justification-contributor as in some way relevant to B’s justification.  Thus, nondoxastic strong awareness involves at least concept application, even if it doesn’t involve belief.[5]  And, since such concept application can be justified or not, it’s important to require justified concept application (an insane or irrational or unjustified concept application wouldn’t be of any help).  But then here too we have a vicious regress of ever more complex concept applications, ones we simply aren’t capable of making, not even on reflection.[6]  Premise (III) thus seems true: requiring strong awareness leads to vicious regress problems and skepticism.

     Let’s turn next to premise (IV).  The point here is just that requiring only weak awareness won’t prevent a view from being vulnerable to the SPO.  Suppose someone had an externalist view according to which it is necessary and sufficient for the justification of a belief that it be produced by a reliable belief-forming process.  And suppose that Jack’s belief B is produced by a belief-forming process token of a relevant process type that is, in fact, reliable (call this process token ‘RP’).[7]   Now imagine that a proponent of this reliabilist view sees that the SPO counts against her position and takes that to be a good objection.  She decides to add to her account of justification the requirement that S have a weak awareness of the reliable process token in question—in this case RP.  Will that pacify those who endorse the SPO?

     It shouldn’t.  For Jack can satisfy a weak awareness requirement without conceiving of RP as being in any way relevant to the justification of his belief B.  But then, according to the SPO, even if this added requirement were satisfied, it would still be an accident from Jack’s subjective perspective that B is true.  For even if Jack applies a concept to RP, he doesn’t apply the right sort of concept to it.  He doesn’t apply a concept that involves his conceiving of RP as contributing in some way to B’s justification.  The only way to guarantee that he does apply such a concept to RP is to have B satisfy a strong awareness requirement.  Thus, we are forced to concede that premise (IV) is true too: by imposing only a weak awareness requirement, the internalist is vulnerable to the SPO and, thereby, loses the main motivation for her view.

     In light of the above defense of the premises of the dilemma for internalism, the Reidian has good reason to reject internalism and look elsewhere for a plausible theory of justification.[8]


B. Reliabilism, Tracking Accounts, and Virtues Accounts


Having set internalism aside, we can narrow our focus to a consideration of noninternalist positions.  Prominent among these will be externalist positions, and prominent among externalist positions is reliabilism.  However, there’s a simple but powerful objection against reliabilist accounts of justification.[9]  They imply that the beliefs of victims of deceptive demons are unjustified simply because those beliefs aren’t reliably formed.  But it seems that if an evil demon victim were a human who (i) was recently captured and then fed the same input to her belief-forming systems as I have to mine and who (ii) formed in response to this input the same beliefs I form, then her beliefs would be as justified as mine (which are, presumably, well-justified).  This is the New Evil Demon Problem and it afflicts not only reliabilism but also other accounts of justification such as tracking or virtues accounts.[10]  These other accounts are susceptible to the New Evil Demon Problem because they too wrongly imply that the beliefs of a demon victim of the sort described above are not justified.  In light of this objection, I will be setting aside these externalist accounts of justification along with internalism.[11]


C. Mentalism: A Noninternalist Version of Evidentialism


Evidentialism is commonly viewed as an internalist position.  But I want to focus on a noninternalist version of it.  Evidentialists say that a belief is justified if and only if it fits the subject’s evidence.  However, according to mentalism, a noninternalist version of evidentialism, the subject’s evidence consists of the mental states of the subject, whether or not the subject is aware or even potentially aware of those mental states.  So a person’s belief can be justified, according to this view, even if the subject is neither aware nor potentially aware of the mental states that count as evidence for it.[12]  For this reason, it is not an internalist position (which isn’t to say it is an externalist position).[13]  Mentalism thus avoids both the dilemma for internalism and—given that my demon victim twin has the same evidence I do—the New Evil Demon Problem.

     In order to understand evidentialism, including the mentalist version of it, we need to understand the way its proponents view the relation of fittingness that holds between a belief and one’s evidence.  We can capture this with the following thesis: 

Necessity: The fittingness of a doxastic response B to evidence E is an essential feature of that response to that evidence. 


Given Necessity, it is impossible for B to be an unfitting doxastic response to E (though it may be an unfitting response to a more extensive evidence base that includes E).[14]  Thomas Reid’s work suggests a counterexample to Necessity, one involving an alien cognizer whose cognitive equipment works quite differently from our own.  Before considering that alien cognizer, let’s focus on an example involving a human whose cognitive faculties will be contrasted with the alien’s.

     The cognitive equipment of the human in question works as follows.  Upon grabbing a billiard ball, this human has a tactile experience (which we can call ‘E1’) that is phenomenally just like the one we ordinarily have when grabbing a billiard ball.  In addition, upon grabbing the ball and experiencing E1, the human in question feels strongly inclined to take E1 to be indicative of the truth of the first person belief B1: “There is a smallish hard round object in my hand”.  We can call this strong felt inclination a ‘connector’—or, more specifically, ‘connector C1’—because it connects E1 with B1.  It seems that for humans, B1 is a fitting doxastic response to evidence consisting of E1+C1.[15]  It also seems that B1 is an unlearned doxastic response to E1+C1—unlearned in the sense that it is natural and automatic, not something learned through induction (i.e., the human didn’t first learn independently, using something other than the sense of touch, that experiencing E1+C1 typically went along with the truth of B1).[16]  Moreover, it seems that C1 is an unlearned connector, a strong felt inclination that occurred naturally without first having its reliability independently verified.  This gives us an example of a fitting unlearned doxastic response for humans.

     Now consider an example of an unfitting doxastic response for humans.  E2 is an olfactory experience of a meadow full of flowers.  C2 is a strong felt inclination to take E2 to be indicative of the truth of B1.  Suppose that upon grabbing a billiard ball, a human experiences E2 and in response forms B1.  That would be an unfitting unlearned doxastic response.  And throwing in an unlearned connector C2 won’t make it fitting.  (The mere fact that—due to C2—this unfitting belief response seems just fine to the person doesn’t make the belief response fitting and, therefore, justified.)  Thus, B1 is an unfitting response for humans not only to E2 but also to E2+C2.[17]

The Reidian counterexample to Necessity focuses on an alien cognizer whose cognitive faculties work differently from ours.  For this alien cognizer (and other members of its kind), the natural and automatic response to grabbing a billiard ball is to have both experience E2 and the connector C2.  All members of this species have always had this same response naturally and automatically without first independently learning that it is reliable.  Moreover, all the behavior that in us produces tactile experiences of various sorts produces in them what we could call ‘olfactory experiences’ of various sorts.  Likewise, beliefs that are produced in us by tactile experiences of various kinds are produced in them by olfactory experiences of various kinds.  In short, the functional role played by tactile experiences in us is played in them by experiences phenomenally just like our olfactory experiences.  As Reid emphasizes, there’s nothing about tactile experiences that makes them especially suited to prompt beliefs in, for example, the hardness of bodies.  These experiences don’t resemble the hardness of bodies or logically imply it.[18]  It seems, therefore, that there could be cognizers in whom olfactory experiences play the same functional role (i.e., the role of mediating between physical touching and beliefs in hardness, shape, etc.) that tactile experiences play in us.

     What should we say, then, of a species of cognizers for whom the natural unlearned response to grabbing a billiard ball is to experience E2, have connector C2, and then form B1?  It seems we should say that for such cognizers, B1 is a fitting unlearned response to E2 and an unfitting unlearned response to E1. This suggests, contrary to Necessity, that the fittingness of an unlearned doxastic response is a contingent feature of it, a feature that depends in some cases on the species of the cognizer who has the response.  Necessity is, therefore, false and should be rejected.  And since mentalism, like other versions of evidentialism, endorses Necessity, we have reason to reject mentalism too.


D. Proper Functionalism


In addition to requiring the rejection of Necessity, this Reidian counterexample pushes us toward a proper function analysis of justification.  For it suggests that the fittingness of a doxastic response depends, in some cases at least, on the species of the cognizer who has it.  What is it about the species of a cognizer that determines such fittingness in those cases?  The answer that immediately suggests itself is that what makes a belief a fitting unlearned doxastic response to an experience has to do with the way the belief-producing faculties of the cognizer in question are supposed to function.  For clearly that is something that can vary from species to species.  Our cognitive faculties are supposed to function so that when we experience tactile sensation E1, our unlearned doxastic response is B1.  Not so the alien cognizers described above.  Their faculties, we may assume, are supposed to function in such a way that when those cognizers experience “olfactory” sensation E2, their unlearned doxastic response is B1.  The sense in which our faculties and theirs are supposed to function in the ways just specified is the same as the sense in which our hearts are supposed to function so that they beat less than 200 times a minute when we are at rest.  And the ‘supposed to’ of heart function is clearly connected with the notion of proper function or healthy function. 

     This suggests that the fittingness of a doxastic response to evidence is contingent upon the proper function of the cognitive faculties of the person in question.  And this, in turn, suggests that the evidentialist claim that “a belief’s justification depends on its being a fitting doxastic response to one’s evidence” could be illuminatingly improved if it were altered to say that “a belief’s justification depends on its being a PF-induced doxastic response to one’s evidence” (where a PF-induced doxastic response is one produced by the proper functioning of the subject’s cognitive faculties).

     I’ve argued elsewhere (2006b: ch. 5) that this move from evidentialism to proper functionalism should culminate in the following analysis of justification:

JPF: S’s belief B is justified iff (i) S does not take B to be defeated and (ii) the cognitive faculties producing B are (a) functioning properly, (b) truth-aimed and (c) reliable in the environments for which they were “designed”.[19]


I’ve explained above the motivation for clause (ii)(a).  The other clauses are added to handle certain counterexamples.  Clause (i)—the no-believed-defeater clause—is added to handle the case of a designer who creates cognizers to form beliefs in a reliable way but also to take each of their reliably formed beliefs to be defeated.  If these cognizers were also designed so that, although they took their beliefs to be defeated, they ignored this fact and continued to hold them, then it seems their beliefs would not be justified.  This shows that proper function isn’t enough for justification.  A no-believed-defeater requirement must be added.

     Clauses (ii)(b) and (ii)(c) are added to handle other counterexamples.  If a creator designed a cognizer’s faculties so that their intended purpose was to form beliefs that minimize psychological trauma (whether or not the beliefs were true), then believing in accord with proper function might in some sense be fitting, but it needn’t be epistemically fitting.  Hence clause (ii)(b).  And if an incompetent creator tried to design a cognizer’s faculties so that they formed mostly true beliefs when in their intended environment but the creator failed miserably (so that when the faculties operated in that environment as they were in fact designed to, they produced mostly false beliefs), then, again, the beliefs in question aren’t epistemically fitting even though the faculties are operating as designed.  Hence clause (ii)(c).  Notice, by the way, that clause (ii)(c) manages to identify a connection between justification and truth without imposing a reliability requirement on justification.  The fact that the faculties are reliable when operating in their intended environment doesn’t entail that they are reliable in the environment in which they are actually operating.[20]

     Obviously this proposal needs further defense and I haven’t the space for that here.[21]  Nevertheless, we have seen in section I that we have some good reasons to reject several of proper functionalism’s main competitors: internalism, mentalism, and the most prominent externalist accounts (reliabilism, tracking accounts, and virtues accounts). Because proper functionalism refrains from imposing an awareness requirement on justification, it avoids the dilemma that must be faced by all forms of internalism.  Because it rejects Necessity it avoids the problem that afflicts mentalism.  And because it refrains from imposing any sort of reliability requirement on justification, it avoids the New Evil Demon Problem, which proves to be the undoing of standard versions of reliabilism, tracking accounts, and virtues accounts. Moreover, we’ve seen that the proper function view is suggested by the intuitions to which I appealed in considering the Reidian counterexample to Necessity.  Together, these considerations make a strong case for proper functionalism understood as JPF.


II. Commonsensism and Skepticism


The first element of the Reidian externalism I’m defending in this paper is the proper functionalism of JPF.[22]  As I already noted, that element consists of an analysis of justification.  The second element is commonsensism which is a particular thesis about what we are noninferentially justified in believing—a thesis which plays a prominent role in the Reidian externalist’s response to skepticism.  In this section, I’ll first lay out the Reidian externalist’s commonsensist thesis and explain how it can be used in responding to skepticism.  Then I’ll respond to two objections to that way of handling skeptical worries.


A. The Commonsensist Thesis


According to Reid, it is a first principle that our faculties are reliable.[23]  And first principles, says Reid, are properly believed noninferentially (2002: 452).  Just as we have noninferential knowledge about our immediate physical environment by means of sense perception and about our past by means of memory and about our own minds by means of introspection, so also we have a faculty by means of which we have noninferential knowledge of first principles.[24]  Reid thinks of first principles as self-evident truths.  He thinks some are contingent and some are necessary.  The one mentioned above (concerning the reliability of our natural faculties) is contingent.  And the faculty by which we know these first principles (whether necessary or contingent) he calls ‘common sense’.  Thus, Reid, as I understand him, disagrees with Alston’s conclusion (in his 1993) that one can’t know that sense perception is reliable without relying on sense perception.  According to Reid, one can know directly and noninferentially, via the faculty of common sense, that sense perception is reliable.[25]

     It is important not to be put off by Reid’s name—i.e., ‘common sense’—for the faculty by which we know first principles.  We tend to classify as ‘common sense beliefs’ beliefs that are peculiar to our own culture or upbringing.  Reid doesn’t—or at least he doesn’t want to.  His intention is to include only propositions that most everyone believes (and knows) noninferentially—things that are immediately accepted by sane persons once considered and understood.  That 2 + 2 = 4, that modus ponens is a valid form of inference, that the thoughts of which I am conscious are my thoughts, that I have some degree of control over my actions—these are examples of what Reid considers to be the dictates of common sense.  The first two are examples of necessary truths known by common sense; the latter two are contingent truths.  A more familiar name for the faculty by which we have noninferential knowledge of necessary truths is a priori intuition.  So the branch of reason Reid calls ‘the faculty of common sense’ encompasses both what we call ‘a priori intuition’ and something akin to it that produces beliefs in contingent rather than necessary truths.

     How exactly does this faculty of common sense work?  What is the process by which it leads us to beliefs in first principles?  Sense perception seems to work as follows: we have sensory experiences and, on the basis of such experiential evidence, we form noninferential perceptual beliefs.  Something similar can be said about the faculty of common sense:

We may observe, that opinions which contradict first principles are distinguished from other errors by this; that they are not only false, but absurd: and, to discountenance absurdity, nature has given us a particular emotion, to wit, that of ridicule, which seems intended for this very purpose of putting out of countenance what is absurd, either in opinion or practice. (2002: 462)


The suggestion seems to be that when we entertain the contrary of a first principle, we experience what Reid calls ‘the emotion of ridicule’.  On the basis of this experience we do two things: we dismiss as absurd the contrary of the first principle and we believe the first principle itself.  Thus, noninferential common sense beliefs, like noninferential perceptual beliefs, are based on experiential evidence.

     According to Reid, how is it that we come to know that our faculty of common sense itself is reliable? Recall that the first principle Reid mentioned earlier was that all our natural faculties are reliable; this includes the faculty of common sense.  Thus, as I understand him, Reid thinks that one noninferential output of the faculty of common sense is the experience-based belief that the faculty of common sense itself is reliable.[26]

     It’s important to recognize that Reidian externalists can endorse the commonsensist position I have in mind without defending all of Reid’s views on common sense.  One certainly doesn’t need to agree completely with Reid about which propositions are first principles; what matters is that we justifiedly believe in the reliability of our faculties in something like the way Reid suggests.  Nor does one need to hold Reid’s views on the details of how one comes to believe in the reliability of our faculties—details concerning whether we have a faculty of common sense employing the emotion of ridicule.  What the Reidian externalist is committed to is the claim that

CS: We noninferentially and justifiedly believe, perhaps on the basis of some sort of nonpropositional evidence, that our faculties are reliable. 


Can CS be taken seriously by sensible philosophers?  As a matter of fact, it is.  Indeed, it is part and parcel of the so-called particularist approach to epistemology which is currently quite popular among analytic philosophers.[27]  Those who employ it rely heavily on the seemingly noninferential knowledge they have of Moorean truths—truths such as that we aren’t the victims of massive deception about the external world or the past and that we aren’t brains in vats.  According to the particularist methodology, accounts of justification according to which it turns out that we aren’t justified in believing such Moorean truths are, thereby, disqualified.  The question of how it is that such epistemologists know these Moorean truths isn’t very often addressed.  But it seems that a natural answer is the Reidian affirmation of CS.[28]


B. Responding to Skepticism


The commonsense element of Reidian externalism provides the resources for an attractive response to skepticism.  The idea is that we have a belief source—which, for convenience, we can call ‘common sense’—that has, as noninferential outputs, higher-level beliefs which are justified (when they satisfy the conditions on justification identified in JPF).  Higher-level beliefs are beliefs about the epistemic credentials of one’s beliefs.  For example, my belief that my belief that I have a hand is justified is a higher-level belief.  So too is my belief that my belief-forming faculties are trustworthy or that my beliefs aren’t produced artificially by the Matrix or by an evil demon.  As with all forms of foundationalism, JPF doesn’t require for a belief’s justification that one have justified higher-level beliefs. Nevertheless, Reidian externalism acknowledges that humans often have justified higher-level beliefs and it offers CS as its primary way of accounting for them.  Moreover, it is by appeal to CS and the justified higher-level beliefs produced by common sense that Reidian externalism is able to address certain kinds of skeptical worries.

     For example, when a person considers the possibility that she’s the victim of a deceptive evil demon or that she’s a brain in a vat or that she’s hooked up to the Matrix, the justification of her beliefs may be in jeopardy.  For she has, at that point, only four options:

(a)     to believe she is not in a skeptical scenario

(b)     to believe she is in a skeptical scenario

(c)     to explicitly withhold judgment on whether she is in a skeptical scenario

(d)     to do none of the above.


If she takes option (b), she will have a defeater for her perceptual beliefs.  Likewise, if she takes option (c), she will have a defeater for her perceptual beliefs.  (To consider whether a perceptual belief is produced by a deceiver and then explicitly withhold judgment on the matter seems to indicate significant uncertainty about the credentials of that belief; and the correct response to such uncertainty is to withhold the perceptual belief.)  To take option (d) seems to involve ignoring the challenge presented by one’s consideration of the skeptical scenarios.  And that seems to require her to stick her head in the sand and refuse to face a legitimate philosophical challenge.  Thus, it seems that her only hope is to take option (a), thereby forming a higher-level belief.  But how can she form such a belief in a justified way?

     According to Reidian externalists, she can justifiedly form such a higher-level belief noninferentially.  They account for this by saying that such a belief is formed using the faculty of common sense.  People consider skeptical scenarios and ask themselves whether things might really be as described in the scenario.  When they do so, they find the suggestion utterly bizarre and absurd—not worth taking seriously.  So they reject it, believing its denial noninferentially on the basis of the experience associated with finding it to be an absurd suggestion.  So long as the noninferential higher-level belief formed in this way satisfies the conditions specified in JPF, it is justified and the skeptical worry is put to rest in a sensible way.  This is how the Reidian externalist uses her commonsensism in responding to skepticism.[29]


C. Objection 1: Epistemic Circularity


The Reidian response to skepticism that I’ve just sketched might seem problematic.  After all, how can you be sure this new noninferential higher-level belief in the reliability of your perception isn’t itself produced by a deceptive demon seeking to prevent you from having such worries?  To invoke the Reidian reply again will presumably involve another belief produced by common sense, this time confirming the trustworthiness of the faculty of common sense itself.  (I already acknowledged this in explaining Reidian commonsensism above.)  But this gives us an explicit case of epistemic circularity—i.e., a case in which you are relying on a belief source to confirm the reliability of that very belief source.  This seems like relying on a witness’s testimony in court to confirm the questioned reliability of that very witness.  And that doesn’t seem to be a reasonable thing to do.[30]  So how can such epistemic circularity be viewed as acceptable by Reidian externalists?

     There is much to be said in response and, unfortunately, I don’t have the space here to say it all.[31]  But perhaps the most important thing to say is that there is a principled way to distinguish between malignant and benign epistemic circularity.  And once we make that distinction, we can see why concerns similar to those about the self-validating witness in court apply to malignant epistemic circularity whereas the Reidian externalist wants to defend only benign epistemic circularity.

     What exactly is epistemic circularity?  The first thing to note is that only higher-level beliefs can be infected with it.  Epistemic circularity arises in connection with beliefs about the trustworthiness or reliability of one’s own belief sources—beliefs such as “my belief source X is trustworthy” or “that belief of mine wasn’t formed in that unreliable way”.  Suppose I hold a belief—either inferentially or noninferentially—in the trustworthiness of one of my belief sources, X.   If, in holding that belief, I depend upon X, then that belief is epistemically circular.  (To depend upon a belief source X in holding a belief B is for B either to be an output of X or to be held on the basis of an actually employed inference chain leading back to an output of X.) 

     In order to distinguish malignant from benign epistemic circularity, it will be helpful to consider the following two kinds of situation in which a person can form epistemically circular beliefs (EC-beliefs) about a source X or a belief B:

QD-situations: Situations where, prior to the EC-belief’s formation, the subject is or should be seriously questioning or doubting the trustworthiness of X or the reliability of B’s formation.[32] 


Non-QD-situations:  Situations where, prior to the EC-belief’s formation, the subject neither is nor should be seriously questioning or doubting the trustworthiness of X or the reliability of B’s formation.


(To seriously question or doubt the trustworthiness or reliability of something is to question or doubt it to the point where one withholds or disbelieves the claim that the thing is trustworthy or reliable.)  Here’s an example of an EC-belief formed in a QD-situation.  Suppose that Tom—who has recently been persuaded, by some skeptical argument, to have serious questions about the reliability of his sense perception—is considering an argument that has been proposed to convince him that his perception is trustworthy after all.  And suppose that he can see that the argument is epistemically circular.  He will, if he’s sensible, consider this argument to be useless as a means to help him regain lost confidence in his perception.  The reason is simple.  Tom has lost his trust in his perception.  He is uncertain about whether it is reliable.  So long as that is the case, it wouldn’t be reasonable for him to depend on the testimony of perception to learn things.  What he should be looking for is some other testimony—some testimony that is independent of perception—that supports the reliability of his sense perception.  This is the sort of context in which epistemic circularity is a bad thing.

     But not all situations in which EC-beliefs are formed are like that.  Consider the following example of a non-QD-situation.  Becky has never had any questions or doubts at all about the trustworthiness of her sense perception.  In fact, she never considered the proposition that her sense perception is reliable before she came to believe, on this occasion, that her sense perception is reliable.  But although Becky has no questions or doubts about the reliability of her sense perception, she is curious about how she came to believe in its reliability.  She is wondering about the origin of this belief, not its legitimacy (about which, as I’ve already indicated, she has no doubt or uncertainty).  Now suppose she discovers that she formed her belief about the reliability of her senses in a way that involved epistemic circularity.  Since she wasn’t trying to allay her worries about this source by looking for some independent verification of its reliability, this discovery doesn’t give rise to any worries in her mind concerning the reliability of her senses.  Becky was merely curious about how it was that she came to hold the belief that her sense perception is reliable—a belief about whose credentials she has no questions or doubts.  Given that this discovery shouldn’t make Becky have serious questions or doubts about the reliability of her senses,[33] this is a non-QD-situation.  It’s a situation in which epistemic circularity doesn’t seem to be a bad thing.

     What all this suggests is that, although epistemic circularity is malignant in QD-situations, it is benign in non-QD-situations.  What explains this difference?  The answer comes in two parts, for there are two kinds of QD-situations: those in which the subject does seriously question or doubt the trustworthiness of a belief’s source or the reliability of its formation, and those in which she doesn’t but should.  In QD-situations of the first sort, the subject has a believed defeater for her EC-belief in virtue of the fact that she seriously questions or doubts the trustworthiness of the belief’s source.  Given clause (i) of JPF, this believed defeater keeps the EC-belief from being justified. 

     Now consider QD-situations of the second sort, where the subject doesn’t doubt or question but should.  In such a situation, the subject should (due to what is required by the proper functioning of her defeater systems) have a believed defeater for her EC-belief.  Hence, she avoids having a believed defeater for her EC-belief only because her cognitive faculties (in particular, her defeater systems) aren’t functioning properly in the formation of that belief.  In this case too, the EC-belief isn’t justified, this time because the proper function condition—clause (ii) of JPF—isn’t satisfied.

     Thus, as long as we refrain from having significant questions or doubts about the trustworthiness of our faculties—as long as we maintain a sort of epistemic optimism—and as long as this optimism isn’t contrary to the proper functioning of our truth-aimed faculties, then it won’t be a problem to have our beliefs infected with epistemic circularity.  However, if we stray into significantly questioning or doubting the reliability of our faculties, then, so long as we persist in that mindset, epistemic circularity creates a problem for us.  Thus, some of our beliefs are vulnerable, in a certain way, to a loss of justification since such loss is as near as the termination of our optimism about the trustworthiness of the faculties producing them.  But being vulnerable to a loss of justification is different from actually losing it. 

     By appealing to the above considerations, the Reidian externalist can explain both why it is reasonable to be worried about some cases of epistemic circularity and why not all EC-beliefs are problematic.  The solution is to maintain a sort of epistemic optimism. 


D. Objection 2: Philosophical Irresponsibility


Even supposing the Reidian externalist is right that appropriate epistemic optimism can save us from skeptical doom, the question arises: isn’t the epistemic optimism recommended above philosophically irresponsible (and, therefore, inappropriate)?  Instead of questioning or doubting the reliability of our faculties, the Reidian externalist urges us to follow our inclination to dismiss such questions and doubts as absurd and not worth taking seriously.  But we might expect philosophers of all people—and epistemologists in particular—to respect the obligation implied by Socrates’s maxim that the unexamined life is not worth living.  Philosophical responsibility requires us to question, doubt, and challenge our most deeply held assumptions.  To do anything less might seem to be contrary to our duty not only as philosophers but as rational beings.

     Despite initial appearances, it’s easy to see that following such a policy consistently and determinedly would result in epistemic disaster.  If we called into question the trustworthiness of all of our belief-forming faculties, determining not to trust them until we could first independently verify their reliability, there would be no escape.  What could we use to independently verify any one of them?  Since all our faculties are called into question, none can be used to ease our doubts about another.  Upon recognizing this consequence of following the policy of questioning and doubting everything, one begins to question the wisdom of it.  And there’s the additional worry that the policy seems to be self-undermining in the sense that the policy itself (or at least the principle behind it[34]) should be questioned and doubted if everything should.  These considerations suggest that we need some special reason to be worried about the Reidian externalist’s advocacy of epistemic optimism over questioning and doubting.

     Moreover, consider what goes on in the case of those faculties which are most readily trusted without independent verification: introspection, memory, and a priori intuition—the belief sources that Descartes and contemporary direct acquaintance theorists rely on.  Memory and a priori intuition are absolutely crucial belief sources if one is to do any reasoning at all.  Memory is required because reasoning takes time and it requires a recollection of premises and previous argument steps.  A priori intuition is essential for discerning the validity of our reasoning.  And yet it’s not too difficult to raise skeptical questions about both memory and a priori intuition.  A memory seeming that p is perfectly compatible with the falsity of p.  Surely a clever demon could capitalize on that, even for short-term memory of previous steps in an argument. Likewise, p’s seeming a priori to be obviously necessary seems compatible with the falsity of p.  A clever demon could, it seems, get someone to feel toward some false proposition the way we feel about something as obviously necessary as 2+2=4.  But then how can we be sure that our memory and a priori beliefs aren’t produced by such a demon?  The fact that they seem clearly not to be doesn’t seem decisive given that that’s just how things would seem to the victim of such a demon.  In these cases, it seems that epistemic optimism, which refuses to treat such skeptical scenarios as grounds for significant questions or doubts, is the only sensible recourse.  It also seems to be an epistemically appropriate response.

     The situation might seem otherwise with introspection.  How could our mental life seem to be a certain way when it isn’t that way?   How could we seem to be appeared to redly when we aren’t so appeared to?  How could we seem to be in pain when we aren’t?  Notice first that these rhetorical questions, intended to show that the skeptic can gain no foothold in challenging the trustworthiness of introspection, depend for their effectiveness on our acceptance of certain modal assumptions—assumptions such as “it’s impossible to seem to be appeared to redly when one isn’t so appeared to” or “it’s impossible to seem to be in pain when one isn’t”.  But those modal assumptions are believed on the basis of a priori intuition.  And we’ve already seen how the skeptic can raise questions about the trustworthiness of a priori intuition. 

     Moreover, it’s relatively uncontroversial that it’s possible to have mistaken introspective beliefs (e.g., about what your true motives are or about how many spots are on the visual image before your mind).  Those who claim that introspection is immune to skeptical challenges will of course insist that, although we can be mistaken about some introspective beliefs, there are others about which we can’t be mistaken.  But why couldn’t a deceptive demon get you to feel, about one of these introspective beliefs that are acknowledged to be fallible, the way defenders of introspection feel about the introspective beliefs they think are clearly infallible?  That certainly seems possible.  And if it is possible, then how do you know that isn’t happening to you with those introspective beliefs in which you place the most confidence?  Perhaps they’re all fallible and mistaken and you’re being deceived into viewing them as obviously infallible.  Again, the fact that they seem clearly not to be introspective beliefs of the fallible sort doesn’t seem at all decisive given that that’s just how things would seem to the victim of such a deception.  Here too it seems that epistemic optimism is both the only sensible recourse and an epistemically appropriate response.

     What conclusions should the Reidian externalist draw from these musings on how memory, a priori intuition, and introspection are subject to skeptical challenges that are best handled by epistemic optimism?  It seems she should conclude that the move made in response to such skeptical worries is no different in kind from the move she endorses in her appeal to common sense.  There seems to be no principled reason to object to the Reidian externalist’s epistemic optimism while accepting the epistemic optimism employed in response to the concerns raised above about memory, a priori intuition, and introspection.  In both cases, the skeptical worries cannot be answered on the skeptic’s own terms.  But in each case they are justifiedly viewed as worries not worth taking seriously.  The Reidian externalist can take comfort, therefore, in the realization that (i) the epistemically appropriate response to questions and doubts about at least some of our faculties is to endorse epistemic optimism and (ii) her own commonsensist optimism doesn’t differ in kind from the optimism endorsed by her detractors who rely on either memory, a priori intuition, or introspection.[35]




References


Alston, William. 1993. The Reliability of Sense Perception.  Ithaca, NY: Cornell University Press.


Bergmann, Michael.  2002.  “Commonsense Naturalism,” in Naturalism Defeated?, James Beilby (ed.).  Ithaca, NY: Cornell University Press.  Pp. 61-90.


________.  2004a. “Epistemic Circularity: Malignant and Benign.” Philosophy and Phenomenological Research 69: 709-27.


________.  2004b. “Externalist Justification Without Reliability.” Philosophical Issues, Epistemology 14: 35-60.


________.  2005.  “Defeaters and Higher-Level Requirements.” The Philosophical Quarterly 55: 419-36.


________. 2006a. “A Dilemma for Internalism,” in Knowledge and Reality: Essays in Honor of Alvin Plantinga, Thomas Crisp, Matthew Davidson, and David Vanderlaan (eds.). Springer Science and Business Media.


________.  2006b. Justification Without Awareness.  New York: Oxford University Press.


BonJour, Laurence.  1985.  The Structure of Empirical Knowledge.  Cambridge, MA: Harvard University Press.


BonJour, Laurence and Ernest Sosa.  2003.  Epistemic Justification. Malden, MA: Blackwell Publishing.


Chisholm, Roderick.  1982.  “The Problem of the Criterion,” in The Foundations of Knowing.  Minneapolis: University of Minnesota Press.  Pp. 61-75.


Cohen, Stewart.  1984.  “Justification and Truth.” Philosophical Studies 46: 279-95.


Comesaña, Juan.  2002.  “The Diagonal and the Demon.” Philosophical Studies 110: 249-66.


Conee, Earl and Richard Feldman.  1985.  “Evidentialism.” Philosophical Studies 48: 15-34.


________.  1998. “The Generality Problem for Reliabilism.” Philosophical Studies 89: 1-29.


________.  2001.  “Internalism Defended,” in Epistemology: Internalism and Externalism, Hilary Kornblith (ed.).  Malden, MA: Blackwell Publishers.  Pp. 230-60.


________.  2005.  “Some Virtues of Evidentialism.” Presented as a symposium paper at the May 2005 Central APA in Chicago, IL.


DeRose, Keith.  1995. “Solving the Skeptical Problem.”  Philosophical Review 104: 1-52.


Feldman, Richard.  2004.  “In Search of Internalism and Externalism,” in The Externalist Challenge: New Studies on Cognition and Intentionality, Richard Schantz (ed.).  New York: de Gruyter.  Pp. 143-56.


Foley, Richard.  1985.  “What’s Wrong With Reliabilism?”  The Monist 68: 188-202.


Lehrer, Keith. 1990. Theory of Knowledge.  Boulder: Westview Press.


Moser, Paul.  1985.  Empirical Justification.  Boston: D. Reidel.


Nozick, Robert.  1981.  Philosophical Explanations.  Cambridge, MA: The Belknap Press.


Plantinga, Alvin.  1993.  Warrant and Proper Function.  New York: Oxford University Press.


Reid, Thomas.  1997 [1764]. An Inquiry into the Human Mind on the Principles of Common Sense, Derek Brookes (ed.). Edinburgh: Edinburgh University Press.


________.  2002 [1785].  Essays on the Intellectual Powers of Man, Derek Brookes (ed.). Edinburgh: Edinburgh University Press.


Schaffer, Jonathan.  2004.  “From Contextualism to Contrastivism.” Philosophical Studies 119: 73-103.


Sosa, Ernest.  1991.  “Reliabilism and Intellectual Virtue,” in Knowledge in Perspective: Selected Essays in Epistemology.  New York: Cambridge University Press.  Pp. 131-45.


Wolterstorff, Nicholas.  2001. Thomas Reid and the Story of Epistemology.  New York: Cambridge University Press.

* My thanks to Jeffrey Brower, Trenton Merricks, Duncan Pritchard, and Michael Rea for comments on earlier drafts.

[1] I argue for the possibility of analyses of justification that are neither internalist nor externalist in my 2006a and 2006b: ch. 3.  See also the discussion at the beginning of section I.C below.

[2] A classic and very influential statement of this objection to externalism can be found in BonJour 1985: 41-45.

[3] Notice that weak awareness can involve conceiving too, so long as it doesn’t involve the sort of conceiving that is distinctive of strong awareness.

[4] For a defense of premises (I) and (V) see Bergmann 2006a and 2006b.

[5] Is it possible to conceive of something (i.e., conceptually categorize it) as so-and-so without believing it is?   Perhaps not.  But that only makes matters worse for internalists since it precludes the possibility of this nondoxastic version of strong awareness that I’m considering here.

[6] I’m assuming that if one thinks strong awareness is required for the justification of belief one won’t think such awareness isn’t required for the justification of concept application.

[7] Every process token is an instance of some reliable process type (it is due, in part, to this fact that reliabilism faces the generality problem—see Conee and Feldman 1998 for an explanation).  But reliabilism is committed to the view that for each process token, there is a relevant type of which it is an instance—a type whose reliability (or unreliability) determines the justification (or lack thereof) of the belief produced by the belief-forming process token in question.

[8] For a much more detailed defense of all four of the controversial premises of my dilemma for internalism, see Bergmann 2006a and 2006b: chs. 1-4.

[9] This objection to reliabilist accounts of justification is proposed by Cohen 1984: 280-82, Foley 1985: section I, Lehrer 1990: 166, and Moser 1985: 240-241.  It is not so clearly useful against reliabilist accounts of warrant—that which makes the difference between knowledge and mere true belief.

[10] Tracking accounts of justification say it is necessary for justification that the belief tracks the truth (S’s belief that p tracks the truth just in case S would believe p if p were true and S wouldn’t believe p if p were false).  Virtue accounts of justification say that it is necessary for justification that the belief was produced by a reliable and stable belief-forming habit.

[11] There are attempts to complicate these externalist accounts of justification to handle the New Evil Demon Problem (see Sosa 1991: 144, BonJour and Sosa 2003: 156-61, and Comesaña 2002).  But, as I argue in Bergmann 2006b: ch. 5, these proposed solutions are ultimately unsatisfying.

[12] As I argue in Bergmann 2006b: ch. 3, Conee and Feldman (2001) commit themselves to this sort of evidentialism when they defend a mentalist version of evidentialism over an accessibilist version of it.

[13] See Bergmann 2006b: ch. 3 for a defense of the view that such evidentialism isn’t an internalist position and also of the view that there are theories of justification that are neither internalist nor externalist.

[14] The connection between Necessity and evidentialism was suggested years ago when Feldman and Conee emphasized that evidentialism is the thesis that justification is determined solely by one’s evidence (Conee and Feldman 1985: 15-16).  They have since made it utterly clear that this is a strong supervenience claim (see Conee and Feldman 2001: 232-34 and Feldman 2004: 155), one that entails that “an epistemically justified doxastic response to a body of evidence is essentially (or necessarily) a justified response” (Conee and Feldman 2005).  This clarification involving an explicit endorsement of Necessity makes perfect sense of their original claim.  After all, if a belief response to evidence were only contingently fitting, that would suggest that the fittingness resulting in justification is not determined by evidence alone.  Instead, it would seem to depend also on whatever it is that determines the contingent fittingness of the belief response to evidence.

[15] Although I think connectors are often present, I don’t want to suggest that they are necessary for justification.

[16] This is perhaps clearer in the case of a blind person.

[17] In order to see that both the doxastic response and the connector in this example are unfitting, it’s important to stipulate that they are unlearned.  For we can imagine a scenario in which one comes to learn through repeated experience that the truth of B1 is associated with experience E2.  See Bergmann 2006b: ch. 5 for further discussion.

[18] Reid defends this view concerning hardness—along with a similar view concerning the relation of tactile (and proprioceptive) sensations to beliefs in extension, motion and shape—in his 1997 [1764]: ch. 5, sects. 2-6 (see especially p. 57).

[19] The scare quotes are to allow for the fact that the design could be by either evolution or a literal designer.

[20] See note 11 for references to other attempts to give an externalist analysis of justification that appeals to reliability without imposing a reliability requirement.  And see Bergmann 2006b: ch. 5 for reasons to prefer my proper functionalist account to these.

[21] But see Bergmann 2004b and 2006b: chs. 5 and 6.

[22] See Wolterstorff 2001: 2-3, 104-5 & 127-28 for an explanation of the way in which proper functionalist accounts are Reidian in inspiration.  Plantinga makes some similar points in his 1993: 50 & 164.

[23] “Another first principle is, that the natural faculties [e.g. sense perception, memory, introspection, etc.], by which we distinguish truth from error are not fallacious.”  (Reid 2002 [1785]: 480)

[24] “We ascribe to reason two offices, or two degrees.  The first is to judge of things self-evident; the second to draw conclusions that are not self-evident from those that are.  The first of these is the province, and the sole province, of common sense ... and is only another name for one branch or degree of reason.” (Reid 2002 [1785]: 433) 

[25] However, as we shall see below, Reid agrees with Alston’s conclusion that in knowing that all our faculties are reliable, we eventually must rely on those very faculties.

[26]  See Reid 2002 [1785]: 481 where he says:

How then come we to be assured of this fundamental truth on which all others rest [i.e., the truth that our natural faculties are not fallacious]?  Perhaps evidence, as in many other respects it resembles light, so in this also, that as light, which is the discoverer of all visible objects, discovers itself at the same time: so evidence, which is the voucher for all truth, vouches for itself at the same time.

[27] See Chisholm 1982.

[28] For a sketch of a defense of the claim that we have a belief source like common sense—a defense that proceeds by noting its similarities to a priori intuition and perception—see Bergmann 2002.

[29] Some, though not all, of the motivation for views like contextualism (DeRose 1995), contrastivism (Schaffer 2004), and denying closure (Nozick 1981) comes from the way those views accommodate the claim that, when we’re considering skeptical worries, we don’t know we aren’t brains in vats or victims of an evil demon.  Insofar as Reidian externalism makes it plausible to deny this claim that those views try to accommodate, it thereby weakens the case in support of those views.

[30] Even if we solve this epistemic circularity problem, there’s another worry.  For in reflecting on this higher-level belief, the believer is faced with the same four options, (a)-(d), that were mentioned at the end of section II.B.  And this seems to force her to form yet another higher-level belief at an even higher level where once again the same problem will arise, leading ultimately to a regress of occurrences of this problem.  For a more developed statement of this worry and a response to it, see Bergmann 2005.

[31] But see Bergmann 2006b: ch. 7 for two arguments in support of the conclusion that epistemic circularity needn’t be a bad thing.  One of these is an improved version of a similar argument given in Bergmann 2004a.

[32] They’re called ‘QD-situations’ because they involve (or at least they should involve) questioning or doubting.

[33] See section II.D below and also Bergmann 2006b: ch. 7 for arguments against the view that the discovery of a belief’s EC-infection should make one have doubts or questions about the reliability of its formation.

[34] I.e., the principle that everything should be questioned and doubted.

[35] For further support of the Reidian externalist’s response to skepticism, see Bergmann 2006b: ch. 8.