The Brain in a Vat Argument
The Brain in a Vat thought-experiment is most commonly used to illustrate global or Cartesian skepticism. You are asked to imagine the possibility that at this very moment you are actually a brain hooked up to a sophisticated computer program that can perfectly simulate experiences of the outside world. If you cannot now be sure that you are not a brain in a vat, then you cannot rule out the possibility that all of your beliefs about the external world are false. Put in terms of knowledge claims, we can construct the following skeptical argument. Let “P” stand for any belief or claim about the external world, say, that snow is white.
- If I know that P, then I know that I am not a brain in a vat
- I do not know that I am not a brain in a vat
- Thus, I do not know that P.
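Though the substance of the argument is philosophical, its form is a simple modus tollens, and its validity can be checked mechanically. The following sketch is only an illustration: it treats the two knowledge claims as unanalyzed propositional atoms, deliberately flattening away the internal structure of the "knows" operator.

```python
from itertools import product

# Atoms (the internal structure of "knows" is deliberately flattened):
#   kp  = "I know that P"
#   knv = "I know that I am not a brain in a vat"

def valid(premises, conclusion):
    """An argument is valid iff no assignment makes every premise true and the conclusion false."""
    return all(conclusion(kp, knv)
               for kp, knv in product([True, False], repeat=2)
               if all(p(kp, knv) for p in premises))

premise1 = lambda kp, knv: (not kp) or knv  # If I know that P, then I know that I am not a BIV
premise2 = lambda kp, knv: not knv          # I do not know that I am not a BIV
conclusion = lambda kp, knv: not kp         # Thus, I do not know that P

print(valid([premise1, premise2], conclusion))  # True: the skeptic's inference is formally valid
```

The philosophical work, of course, lies entirely in the premises; the check only confirms that granting both leaves no room to resist the conclusion.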
The Brain in a Vat Argument is usually taken to be a modern version of René Descartes' argument (in the Meditations on First Philosophy) that centers on the possibility of an evil demon who systematically deceives us. The hypothesis has been the premise behind the movie The Matrix, in which the entire human race has been placed into giant vats and fed a virtual reality at the hands of malignant artificial intelligence (our own creations, of course).
One of the ways some modern philosophers have tried to refute global skepticism is by showing that the Brain in a Vat scenario is not possible. In his Reason, Truth and History (1981), Hilary Putnam first presented the argument that we cannot be brains in a vat, which has since given rise to a large discussion with repercussions for the realism debate and for central theses in the philosophy of language and mind. As we shall see, however, it remains far from clear how exactly Putnam’s argument should be taken and what it actually proves.
Table of Contents
- Skepticism and Realism
- Putnam’s Argument
- Reconstructions of the Argument
- Brains in a Vat and Self-Knowledge
- Significance of the Argument
- References and Further Reading
1. Skepticism and Realism
Putnam’s argument is designed to attack the possibility of global skepticism that is implied by metaphysical realism. Putnam defines metaphysical realism as the view which holds that “…the world consists of some fixed totality of mind-independent objects. There is exactly one true and complete description of ‘the way the world is.’ Truth involves some sort of correspondence relation between words or thought-signs and sets of things.” (1981, 49). This construal brings out the idea that for metaphysical realists, truth is not reducible to epistemic notions but concerns the nature of a mind-independent reality. This characterization finds an accurate target in those scientific materialists who believe in a “ready-made” world of scientific kinds independent of human classification and conceptualization. There are, however, many self-professed metaphysical realists who are not happy with Putnam’s definition; it saddles the realist with the classical difficulty of matching words to objects and of providing for a correspondence relation between sentences and mind-independent “facts.” The metaphysical realist is forced to construe her thesis ontologically, as an adherence to some fixed furniture of objects in the world, which ignores the possibility that ontological commitment may be specified not as a commitment to a set of entities but rather to the truth of a class of sentences or even of whole theories of the world.
One proposal is to construe metaphysical realism as the position that there are no a priori epistemically derived constraints on reality (Gaifman, 1993). By stating the thesis negatively, the realist sidesteps the thorny problems concerning correspondence or a “ready-made” world, and shifts the burden of proof onto the challenger to refute the thesis. One virtue of this construal is that it defines metaphysical realism at a sufficient level of generality to apply to all philosophers who currently espouse metaphysical realism. For Putnam’s metaphysical realist will also agree that truth and reality cannot be subject to “epistemically derived constraints.” This general characterization of metaphysical realism is enough to provide a target for the Brains in a Vat argument. For there is a good argument to the effect that if metaphysical realism is true, then global skepticism is also true, that is, it is possible that all of our referential beliefs about the world are false. As Thomas Nagel puts it, “realism makes skepticism intelligible,” (1986, 73) because once we open the gap between truth and epistemology, we must countenance the possibility that all of our beliefs, no matter how well justified, nevertheless fail to accurately depict the world as it really is. [See Fallibilism.] Donald Davidson also emphasizes this aspect of metaphysical realism: “metaphysical realism is skepticism in one of its traditional garbs. It asks: why couldn’t all my beliefs hang together and yet be comprehensively false about the actual world?” (1986, 309)
The Brain in a Vat scenario is just an illustration of this kind of global skepticism: it depicts a situation where all our beliefs about the world would presumably be false, even though they are well justified. Thus if one can prove that we cannot be brains in a vat, by modus tollens one can prove that metaphysical realism is false. Or, to put it in more schematic form:
- If metaphysical realism is true, then global skepticism is possible
- If global skepticism is possible, then we can be brains in a vat
- But we cannot be brains in a vat
- Thus, metaphysical realism is false (1,2,3)
This article focuses mostly on claim (3), although some philosophers question (2), believing there may be ways of presenting the skeptical thesis even while granting Putnam’s argument.
2. Putnam’s Argument
The major premise that underwrites Putnam’s argument is what he calls a “causal constraint” on reference:
(CC) A term refers to an object only if there is an appropriate causal connection between that term and the object
To understand this criterion we need to unravel what is meant by “appropriate causal connection.” If an ant were to accidentally draw a picture of Winston Churchill in the sand, few would claim that the ant represented or referred to Churchill. Similarly, if I accidentally sneeze “Genghis Khan,” just because I verbalize the words does not mean that I refer to the infamous Mongolian conqueror, for I may have never heard of him before. Reference cannot simply be an accident: or, as Putnam puts it, words do not refer to objects “magically” or intrinsically. Now establishing just what would count as necessary and sufficient conditions for a term to refer to an object turns out to be a tricky business, and there have been many “causal theories” of reference supplied to do just that. Many have taken the virtue of Putnam’s constraint (CC) to be its generality: it merely states a necessary condition for reference and need not entail anything more controversial. Sometimes it is claimed that endorsing (CC) commits one to semantic externalism, but the issues are more complex, since many internalists (for example, John Searle) appear to accept (CC). The relation between externalism and Putnam’s argument will be considered in more detail later (in the section “Brains in a Vat and Self-Knowledge”).
With the causal constraint established, Putnam goes on to describe the Brain in a Vat scenario. It is important to note exactly what the thought-experiment is, for failure to appreciate the ways in which Putnam has changed the standard skeptical nightmare has led to many mistaken “refutations” of the argument. The standard picture has a mad-scientist (or a race of aliens, or AI programs…) envatting brains in a laboratory and then inducing a virtual reality through a sophisticated computer program. On this picture, there is an important difference between viewing the brains from a first-person or a third-person viewpoint. There is the point of view of the brains in a vat (henceforth BIVs), and the point of view of someone outside the vat. Clearly when the mad-scientist says “that is a brain in a vat” of a BIV, he would be saying something true, whatever the BIV itself means when it says it is a brain in a vat. Furthermore, a BIV could presumably pick up referential terms by borrowing them from the mad-scientist. Thus when a BIV says “there is a tree” of a simulation of a tree, it would be saying something false, since its term “tree,” picked up from the mad-scientist, refers to actual trees, whereas what stands before it is something else entirely. Putnam thus stipulates that all sentient beings are brains in a vat, hooked up to one another through a powerful computer that has no programmer: “that’s just how the universe is.” We are then asked, given at least the physical possibility of this scenario, whether we could say or think it. Putnam answers that we could not: the assertion “we are brains in a vat” would be self-refuting in the same way that the general statement “all general statements are false” is.
The thought-experiment stipulates that brains in a vat would have thoughts qualitatively identical to those of the unenvatted; or at least they would have the same “notional world.” The difference is that in the vat-world there are no external objects. When a BIV says “There is a tree in front of me,” there is in fact no tree in front of him, only a simulated tree produced by the computer’s program. However, if there are no trees, there can be no causal connection between a BIV’s tokens of “tree” and actual trees. By (CC), the BIV’s “tree” does not refer to trees. This leads to some interesting consequences.
A standard reading of a BIV’s utterance of “There is a tree” would have the statement come out false, since there are no trees for the BIV to refer to. But that holds only on the assumption that “tree” refers to trees in the BIV’s language. If “tree” does not refer to trees, then the semantic evaluation of the sentence becomes unclear. Sometimes Putnam suggests that a BIV’s tokens refer to images or sense-impressions. At other times he agrees with Davidson, who claims that the truth-conditions would be facts about the electronic impulses of the computer that are causally responsible for producing the sense-impressions. Davidson has a good reason to choose these truth-conditions: through the principle of charity he would want to interpret the BIV’s sentences to come out true, but he would not want the truth-conditions to be phenomenalistic. Thus it turns out that when a BIV says “There is a tree in front of me,” he is saying something true—if in fact the computer is sending the right impulses to him.
Another suggestion is that the truth-conditions of the BIV’s utterances would be empty: the BIV asserts nothing at all. This seems rather strong, however: surely the BIV would mean something when it utters “There is a tree in front of me,” even if its statement gets evaluated differently because of the radical difference of its environment. One thing is clear, however: a BIV’s tokening of “tree” or any other such referential term would have a different reference assignment from that of a non-envatted person’s tokenings. According to (CC), my tokening of “tree” refers to trees because there is an appropriate causal link between it and actual trees (assuming of course I am not a BIV). A brain in a vat, however, would not be able to refer to trees since there are no trees (and even if there were trees, there would not be the appropriate causal relation between its tokenings of “tree” and real trees, unless we bring back the standard fantasy and assume it picked up the terms from the mad-scientist). Now one might be inclined to think that because there are at least brains and vats in the universe, a BIV would be able to refer to brains and vats. But the tokening of “brain” is never actually caused by a brain, except in the very indirect sense that its brain causes all of its tokenings. The minimal constraint (CC) will then ensure that “brain” and “vat” in the BIV language do not refer to brains and vats.
We are now in a position to give Putnam’s argument. It has the form of a conditional proof:
- Assume we are brains in a vat
- If we are brains in a vat, then “brain” does not refer to brain, and “vat” does not refer to vat (via CC)
- If “brain in a vat” does not refer to brains in a vat, then “we are brains in a vat” is false
- Thus, if we are brains in a vat, then the sentence “We are brains in a vat” is false (1,2,3)
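The propositional skeleton of this conditional proof, and of the further step to "necessarily false," can be made vivid with a toy model. The sketch below is only illustrative: it deliberately flattens reference and truth into propositional atoms rather than genuine semantic relations.

```python
from itertools import product

# Atoms (a deliberate flattening of the semantics):
#   b = "we are brains in a vat"
#   r = "our term 'brain in a vat' refers to brains in a vat"
#   t = "the sentence 'We are brains in a vat' is true"

def entails(premises, conclusion):
    """Check entailment by enumerating all truth-value assignments."""
    return all(conclusion(b, r, t)
               for b, r, t in product([True, False], repeat=3)
               if all(p(b, r, t) for p in premises))

cc = lambda b, r, t: (not b) or (not r)  # (2): if we are BIVs, "brain in a vat" does not refer (via CC)
p3 = lambda b, r, t: r or (not t)        # (3): if the term does not refer, the sentence is false
dq = lambda b, r, t: t == b              # disquotation: the sentence is true iff we are BIVs

# (4): if we are BIVs, then the sentence "We are brains in a vat" is false
print(entails([cc, p3], lambda b, r, t: (not b) or (not t)))  # True

# Adding disquotation yields the stronger claim that "we are brains in a vat" comes out false
print(entails([cc, p3, dq], lambda b, r, t: not b))           # True
```

As the discussion that follows shows, the contested step is precisely whether the disquotation premise is available to a speaker who may, for all she knows, be speaking Vatese.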
Putnam adds that “we are brains in a vat” is necessarily false, since whenever we assume it is true we can deduce its contradictory. The argument is valid, and its soundness seems to depend on the truth of (3), assuming (CC) is true. One immediate problem is determining the truth-conditions for “we are brains in a vat” on the assumption that we are brains in a vat, speaking a variation of English (call it Vatese). From (CC) we know that “brains in a vat” does not refer to brains in a vat. But it doesn’t follow from this alone that “we are brains in a vat” is false. Compare:
(A) “Grass is green” is true iff grass is green
(B) “Grass is green” is true iff one has sense-impressions of grass being green
(C) “Grass is green” is true iff one is in electronic state Q
On the assumption that we are brains in a vat, (CC) would appear to rule out (A): “grass” does not refer to grass since there is no appropriate causal connection between “grass” and actual grass. Thus the truth-conditions for the statement “grass is green” would be nonstandard. If we take them to be those captured in (B), then “Grass is green” as spoken by a brain in a vat would be true. Consequently the truth-conditions for “we are brains in a vat” would be captured by (D):
(D) “We are brains in a vat” is true iff we have sense impressions of being brains in a vat
On this construal of the truth-conditions, “We are brains in a vat” as uttered by a BIV would presumably be false, since a brain in a vat would not have sense-impressions of being a brain in a vat: recall that a BIV’s notional world would be equivalent to that of the unenvatted, and he would appear to himself to be a normally embodied person with a real body, and so on. However, if we follow Davidson and adopt the truth-conditions of (C), we would have the following:
(E) “We are brains in a vat” is true if and only if we are in electronic state Q
Now it is no longer clear that “We are brains in a vat” is false: for if the brain is in the appropriate electronic state, the truth-conditions could well be fulfilled. There are other reconstructions of the argument that do not depend on specifying the truth-conditions of a BIV’s utterances. What is important is the idea that the truth-conditions would be non-standard, as in:
(F) “We are brains in a vat” is true if and only if we are BIVs*
Now since being a BIV* (whatever that is) is not the same as being a BIV, we can construct the following conditional proof argument:
- Assume we are BIVs
- If we are BIVs, “we are brains in a vat” is true if and only if we are BIVs*
- If we are BIVs, we are not BIVs*
- If we are BIVs, then “we are brains in a vat” is false (2,3)
- If we are BIVs, then we are not BIVs (4)
Notice that the argument leaves the antecedent of the conditional open, what Wright calls an “open subjunctive.” We do not want the premises of the argument to be counterfactual, following the train of thought “If we were brains in a vat, the causal constraint would entail that my words ‘brain in a vat’ would come to denote something different, BIV*.” For then we would be assuming that we are not brains in a vat, when that is what the argument is supposed to prove.
Nevertheless, there are still problems with the appeal to disquotation to get us from (4) to (5). Even if, by virtue of the causal constraint, the sentence “We are brains in a vat” is false, an intuitive objection runs that this change of language should not entail the falsity of the proposition that we are brains in a vat. As we shall see, many recent reconstructions of Putnam’s argument are sensitive to this point and try to account for it in various ways. In the following section, I shall focus on two of the more popular reconstructions of the argument, put forward by Brueckner (1986) and Wright (1994).
3. Reconstructions of the Argument
Brueckner (1986) argues that even if we grant the reasoning of the above argument up to (4), the most it proves is that if we are brains in a vat, then the sentence “We are brains in a vat” (as uttered by a BIV) is false, and that if we are not brains in a vat, then “We are brains in a vat” is false (now expressing a different false proposition). If correct, the argument would prove that whether or not we are brains in a vat, “we are brains in a vat” expresses some false proposition. Assuming the truth-conditions of a BIV would be those captured in (D), we could then devise the following constructive-dilemma-style argument:
- Either I am a BIV or I am not a BIV
- If I am a BIV, then “I am a BIV” is true iff I have sense impressions of being a BIV
- If I am a BIV, then I do not have sense-impressions of being a BIV
- If I am a BIV, then “I am a BIV” is false (2,3)
- If I am not a BIV, then “I am a BIV” is true iff I am a BIV
- If I am not a BIV, then “I am a BIV” is false (5)
- “I am a BIV” is false (1, 4, 6)
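Brueckner's constructive dilemma is likewise formally checkable. The sketch below is again an illustrative flattening, with three propositional atoms standing in for the claims about envatment, sense-impressions, and sentence truth.

```python
from itertools import product

# Atoms:
#   b = "I am a BIV"
#   s = "I have sense-impressions of being a BIV"
#   t = "the sentence 'I am a BIV' is true"

def entails(premises, conclusion):
    """Check entailment by enumerating all truth-value assignments."""
    return all(conclusion(b, s, t)
               for b, s, t in product([True, False], repeat=3)
               if all(p(b, s, t) for p in premises))

p2 = lambda b, s, t: (not b) or (t == s)  # (2): if I am a BIV, "I am a BIV" is true iff I have the impressions
p3 = lambda b, s, t: (not b) or (not s)   # (3): if I am a BIV, I lack sense-impressions of being one
p5 = lambda b, s, t: b or (t == b)        # (5): if I am not a BIV, "I am a BIV" is true iff I am a BIV

# (7): "I am a BIV" is false; premise (1), b or not-b, is a tautology and can be omitted
print(entails([p2, p3, p5], lambda b, s, t: not t))  # True
```

On either horn of the dilemma the sentence comes out false, which is just the conclusion Brueckner draws; the objections that follow target not this validity but our right to the premises.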
If “I am a BIV” expresses the proposition that I am a brain in a vat, and we know from the argument that “I am a BIV” is false, then it follows that I know I am not a brain in a vat, thus refuting premise (2) of the skeptical argument. However, can I know that “I am a brain in a vat” expresses the proposition that I am a brain in a vat? If I am a brain in a vat, then “I am a brain in a vat” would, via the causal constraint on reference, express some different proposition (say, that I am a brain in a vat in the image). So even if “I am a BIV” is false whether or not I am a BIV, I might not be in the position to identify which false proposition I am expressing, in which case I cannot claim to know that my sentence “I am not a brain in a vat” expresses the true proposition that I am not a brain in a vat.
Some philosophers have gone even further, claiming that if the argument ends here, it actually can be used to strengthen skepticism. The metaphysical realist can claim that there are truths not expressible in any language: perhaps the proposition that we are brains in a vat is true, even if no one can meaningfully utter it. As Nagel puts it:
If I accept the argument, I must conclude that a brain in a vat can’t think truly that it is a brain in a vat, even though others can think this about it. What follows? Only that I cannot express my skepticism by saying “Perhaps I am a brain in a vat.” Instead I must say “Perhaps I can’t even think the truth about what I am, because I lack the necessary concepts and my circumstances make it impossible for me to acquire them!” If this doesn’t qualify as skepticism, I don’t know what does. (Nagel, 1986)
Putnam makes it clear that he is not merely talking about semantics: he wants to provide a metaphysical argument that we cannot be brains in a vat, not just a semantic one that we cannot assert we are. If he is just proving something about meaning, it is open for the skeptic to say that the bonds between language and reality can diverge radically, perhaps in ways we can never discern.
There is yet another worry with the argument, centering once again on the appropriate characterization of the truth-conditions in (2). If one claimed in response to the above objection that in fact I do know that “I am a brain in a vat” expresses the proposition that I am a brain in a vat (whether or not I am a brain in a vat), one may have in mind some general disquotation principle:
(DQ): “Grass is green” is true iff grass is green
If it is an a priori truth that any meaningful sentence in my language homophonically disquotes, then we can a priori know that the following is also true:
(G): “I am a brain in a vat” is true iff I am a brain in a vat
Here is the obvious problem: if we are not to beg the question, we have to be open to the possibility that we are brains in a vat, speaking Vatese. Then we would get:
(H): If I am a BIV, then “I am a BIV” is true iff I am a brain in a vat.
However, (H) gives us truth-conditions that differ from premise (2) of Brueckner’s argument:
(2) If I am a BIV, then “I am a BIV” is true iff I have sense-impressions of being a BIV
If we assume (CC), then (H) and (2) are inconsistent, since the term “BIV” would refer to distinct entities. No contradiction ensues if we assume we are speaking in English: for then (H) would presumably be false (appealing to CC). But the problem is that we cannot beg the question by assuming we are speaking in English: if we assume that, then we know in advance of any argument that we are not speaking Vatese and hence that we are not brains in a vat. But if we do not know which language we are speaking, then we cannot properly assert (2).
One response to this is to formulate two different arguments, one whose meta-language is in English, the other whose meta-language is in Vatese, and show that distinct arguments can be run to prove that “I am a BIV” is false. Even if successful, however, these arguments run into the objection canvassed before: if I do not know which language I am speaking in, even if I know “I am a brain in a vat” is false, I do not know which false proposition I am expressing and hence cannot infer that I know that I am not a brain in a vat.
Similar worries plague Crispin Wright’s popular formulation of the argument (1994):
- My language disquotes
- In BIVese, “brains in a vat” does not refer to brains in a vat
- In my language, “brains in a vat” is a meaningful expression
- In my language, “brains in a vat” refers to brains in a vat
- My language is not BIVese (2,4)
- If I am a BIV, then my language is BIVese
- I am not a BIV
There are several virtues to this reconstruction: first of all, it gets us to the desired conclusion without specifying what the truth-conditions of a BIV’s utterances would be. They could be sense-impressions, facts about electronic impulses, or the BIV’s sentences may not refer at all. All that is needed for the argument is that there is a difference between the truth-conditions for a BIV’s sentences and those of my own language. The other virtue of the argument is that it clearly brings out the appeal to the disquotation principle that was implicit in the previous arguments. If indeed (DQ) is an a priori truth, as many philosophers maintain, and if we accept (CC) as a condition of reference, the argument appears to be sound. So have we proven that we are not brains in a vat?
Not so fast. The previous objection can be restated: if I do not yet know whether or not I am a brain in a vat before the argument is completed, I do not know which language I am speaking (English or Vatese). If I am speaking Vatese, then so long as it is a meaningful language, I can appeal to disquotation to establish that “brains in a vat” does refer to brains in a vat. But this contradicts premise (2). The problem seems to be that (DQ) is being used too liberally. Clearly we do not want to say that every meaningful term disquotes in the strong sense required for reference. If so, we could take it to be an a priori truth that “Santa Claus” refers to Santa Claus. But “Santa Claus” does not refer to Santa Claus, since there is no Santa Claus. We could introduce a new term “pseudo-reference” and hold that “Santa Claus” pseudo-refers to Santa Claus, and then attach further conditions on reference in order to establish what it would take for the term to truly refer. One proposal (Weiss, 2000) is the following principle:
(W): If “x” pseudo-refers to x in L, and x exists, then “x” refers to x in L
Thus, given the disquotation principle we know that in my language “Santa Claus” pseudo-refers to Santa Claus. Supposing to the joyful adulation of millions that Santa Claus is discovered to actually exist, then given (W) “Santa Claus” refers to Santa Claus. Now this also seems too simplistic: as Putnam pointed out, in order for a term to refer to an object we must establish more than the mere existence of the object. There has to be the appropriate causal relation between the word and object, or we are back to claiming that in accidentally sneezing “Genghis Khan” I am referring to Genghis Khan. But whether we accept (W) or attach stronger conditions to reference, it is clear that any such move would make Wright’s formulation invalid. For then we would have:
- My language disquotes
- In BIVese, “brains in a vat” does not refer to brains in a vat (CC)
- In my language “brain in a vat” is a meaningful expression
- In my language, “brain in a vat” pseudo-refers to brains in a vat (DQ)
- My language is not BIVese (2,4)
- If I am a BIV, then my language is BIVese
- I am not a BIV
(5) no longer follows from (2) and (4), given the ambiguity of “refers” in (2) and “pseudo-refers” in (4). If on the other hand we insist on a univocal sense of reference, then either (2) will contradict the (DQ) principle, or we are not entitled to appeal to (1), insofar as it would beg the question that we are speaking English, a language for which the (DQ) principle applies.
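The equivocation complaint can itself be made precise. In the sketch below (an illustrative model, with atoms of my own choosing), distinguishing reference from pseudo-reference turns a valid inference into an invalid one:

```python
from itertools import product

# Atoms:
#   e = "my language is BIVese"
#   r = "in my language, 'brain in a vat' refers to brains in a vat"
#   q = "in my language, 'brain in a vat' pseudo-refers to brains in a vat"

def entails(premises, conclusion):
    """Check entailment by enumerating all truth-value assignments."""
    return all(conclusion(e, r, q)
               for e, r, q in product([True, False], repeat=3)
               if all(p(e, r, q) for p in premises))

p2 = lambda e, r, q: (not e) or (not r)  # (2): in BIVese, "brain in a vat" does not refer to brains in a vat
p4_ref = lambda e, r, q: r               # (4) on the univocal reading: my term refers
p4_pseudo = lambda e, r, q: q            # (4) on the weakened reading: my term merely pseudo-refers

conc = lambda e, r, q: not e             # (5): my language is not BIVese

print(entails([p2, p4_ref], conc))     # True: with univocal reference, (5) follows
print(entails([p2, p4_pseudo], conc))  # False: with pseudo-reference, the inference fails
```

The counterexample to the weakened argument is exactly the skeptic's scenario: a BIVese speaker whose term pseudo-refers without referring.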
4. Brains in a Vat and Self-Knowledge
Ted Warfield (1995) has sought to provide an argument that we are not brains in a vat based on considerations of self-knowledge. He defends two premises that seem plausibly true, and then argues for the desired metaphysical conclusion:
- I think that water is wet
- No brain in a vat can think that water is wet
- Thus, I am not a brain in a vat (1,2)
Premise (1) is said to follow from the thesis of privileged access, which holds that we can at least know the contents of our own occurring thoughts without empirical investigation of our environment or behavior. Warfield’s strategy is to present each premise as non-question begging against the global skeptic, in which case at no point can we appeal to the external environment as justification. Since the thesis of privileged access is said to be known a priori whether we are brains in a vat or not, premise (1) can be known non-empirically.
Premise (2) is a little trickier to establish non-empirically. The main argument for it is by analogy with other arguments in the literature that have been used to establish content externalism. The main strategy is derived from Putnam’s Twin Earth argument (1975): imagine a world that is indistinguishable from Earth except for one detail: the odorless, drinkable liquid that flows in the rivers and oceans is composed of the chemical XYZ and not H2O. If we take Oscar on Earth and his twin on Twin Earth, Putnam argues that they would refer to two different substances and hence mean two different things: when Oscar says “pass me some water” he refers to H2O and means water, but when Twin-Oscar says “pass me some water” he refers to XYZ and thus means twin-water. As Burge and others have pointed out, if the meanings of their words are different, then the concepts that compose their beliefs should differ as well, in which case Oscar would believe that water is wet whereas Twin-Oscar would believe that twin-water is wet. While Putnam’s original slogan was “meanings just ain’t in the head,” the argument can be extended to beliefs as well: “beliefs just ain’t in the head,” but depend crucially on the layout of one’s environment.
If we accept content externalism, then the motivation for (2) is as follows. In order for someone’s belief to be about water, there must be water in that person’s environment: externalism rejects the Cartesian idea that one can simply read off one’s belief internally (if so then we would have to say that Oscar and his twin have the same beliefs since they are internally the same). So it doesn’t seem possible that a BIV could ever come to hold a belief about water (unless of course he picked up the term from the mad-scientist or someone outside the vat, but here we must assume again Putnam’s scenario that there is no mad-scientist or anyone else he could have borrowed the term from). As Warfield puts it, premise (2) is a conceptual truth, established on the basis of Twin-earth style arguments, a matter of “armchair” a priori reflection and thus able to be established non-empirically.
The problem with establishing (2) non-empirically, though, is that the externalist arguments succeed only on the assumption that our own use of “water” refers to a substantial kind, and this seems to be a matter of empirical investigation. Imagine a world where “water” does not refer to any liquid substance but is rather the object of a complex hallucination that never gets discovered. On this “Dry Earth,” “water” would refer not to a substantial kind but rather to a superficial kind. The analogy to the BIV case is clear: since it is not an a priori truth that “water” refers to a substantial kind in the BIV’s language, it cannot be known non-empirically whether “water” picks out a substantial or a superficial kind; if it picks out a superficial kind, then a BIV could very well think that water is wet so long as it has the relevant sense-impressions.
5. Significance of the Argument
Some philosophers have claimed that even if Putnam’s argument is sound, it doesn’t do much to dislodge Cartesian or global skepticism. Crispin Wright (1994) argues that the argument does not affect certain versions of the Cartesian nightmare, such as my brain being taken out of my skull last night and hooked up to a computer. Someone of a Positivist bent might argue that if there is no empirical evidence to appeal to in order to establish whether we are brains in a vat or not, then the hypothesis is meaningless, in which case we do not need an argument to refute it. While few philosophers today would hold onto such a strong verifiability theory of meaning, many would maintain that such metaphysical possibilities do not amount to real cases of doubt and thus can be summarily dismissed. Still others see the possibility of being a brain in a vat as an important challenge for cognitive science and the attempt to create a computer model of the world that can simulate human cognition. Dennett (1991), for example, has argued that it is physically impossible for a brain in a vat to replicate the qualitative phenomenology of a non-envatted human being. Nevertheless, one should hesitate before making possibility claims when it comes to future technology. And as films like The Matrix, eXistenZ, and even The Truman Show indicate, the idea of living in a simulated world indistinguishable from the real one is likely to continue to fascinate the human mind for many years to come—whether or not it is a brain in a vat.
6. References and Further Reading
- Boghossian, Paul. 1999. What the Externalist Can Know A Priori. Philosophical Issues 9
- Brueckner, Anthony. 1986. Brains in a Vat. Journal of Philosophy 83: 148-67
- Brueckner, Anthony. 1992. If I am a Brain in a vat, then I am not a Brain in a Vat. Mind 101:123-128
- Burge, T. 1982. Other Bodies. In A. Woodfield. Ed., Thought and Object: Essays on Intentionality. Oxford: Oxford University Press, 91-120.
- Casati, R. and Dokic J. 1991. Brains in a Vat, Language and Metalanguage. Analysis 51: 91-93.
- Collier, J. 1990. Could I Conceive Being a Brain in a Vat? Australasian Journal of Philosophy 68: 413-419.
- Davidson, Donald. 1986. “A Coherence Theory of Truth and Knowledge,” in Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson. Oxford: Blackwell.
- Davies, D. 1995. Putnam’s Thought-Teaser. Canadian Journal of Philosophy 25(2):203-227.
- Ebbs, G. 1992. Skepticism, Objectivity and Brains in Vats. Pacific Philosophical Quarterly 73
- Forbes, G. 1995. Realism and Skepticism: Brains in a Vat Revisited. Journal of Philosophy 92(4): 205-222
- Gaifman, Haim. 1993. Metaphysical Realism and Vats in a Brain. (unpublished ms)
- Nagel, Thomas. 1986. The View from Nowhere. Cambridge: Cambridge University Press.
- Noonan, Harold. 1998. Reflections on Putnam, Wright and brains in a vat. Analysis 58:59-62
- Putnam, Hilary 1975. The Meaning of “Meaning.” Mind, Language and Reality: Philosophical Papers, Vol 1. Cambridge: Cambridge University Press
- Putnam, Hilary. 1982. Reason, Truth and History. Cambridge: Cambridge University Press.
- Putnam, Hilary. 1994. Reply to Wright. In P. Clark and B. Hale, eds. Reading Putnam. Oxford, Blackwell.
- Sawyer, Sarah. 1999. My Language Disquotes. Analysis, vol. 59:3: 206-211
- Smith, P. (1984), Could We Be Brains in a Vat?, Canadian Journal of Philosophy 14
- Steinitz, Y. Brains in a vat? Different Perspectives. Philosophical Quarterly 44 (175): 213-222
- Tymoczko, T. 1989. In Defense of Putnam’s Brains. Philosophical Studies 57(3) 281-297
- Warfield, Ted. 1995. Knowing the World and Knowing our Minds. Philosophy and Phenomenological Research 55 (3): 525-545.
- Weiss, B. 2000. Generalizing Brains in a Vat. Analysis 60: 112-123
- Wright, Crispin. 1994. On Putnam’s Proof that we cannot be brains in a vat. In P. Clark and B. Hale. Eds, Reading Putnam. Oxford: Blackwell.
Lance P. Hickey
Southern Connecticut State University
U. S. A.
1. Skeptical Hypotheses and the Skeptical Argument
The Cartesian skeptic puts forward various logically possible skeptical hypotheses for our consideration, such as that you are now merely dreaming that you are reading an encyclopedia entry. The more radical Evil Genius hypothesis is this: you inhabit a world consisting of just you and a God-like Evil Genius bent on deceiving you. In the Evil Genius world, nothing physical exists, and all of your experiences are directly caused by the Evil Genius. So your experiences, which represent there to be an external world of physical objects (including your body), give rise to systematically mistaken beliefs about your world (such as that you are now sitting at a computer). (For an overview of the problem of external world skepticism, see Greco 2007.) Some philosophers would deny that the Evil Genius hypothesis is genuinely logically possible. Materialists who hold that the mind is a complex physical system deny that it is possible for there to be an Evil Genius world, since, on their view, your mind could not possibly exist in a matterless world. Accordingly, a modern skeptic will have us consider an updated skeptical hypothesis that is consistent with materialism. Consider the hypothesis that you are a disembodied brain floating in a vat of nutrient fluids. This brain is connected to a supercomputer whose program produces electrical impulses that stimulate the brain in just the way that normal brains are stimulated as a result of perceiving external objects in the normal way. (The movie ‘The Matrix’ depicts embodied brains which are so stimulated, while their bodies float in vats.) If you are a brain in a vat, then you have experiences that are qualitatively indistinguishable from those of a normal perceiver. If you come to believe, on the basis of your computer-induced experiences, that you are looking at a tree, then you are sadly mistaken.
After having sketched this brain-in-a-vat hypothesis, the skeptic issues a challenge: can you rule out the possibility described in the hypothesis? Do you know that the hypothesis is false? The skeptic now argues as follows. Choose any target proposition P concerning the external world, which you think you know to be true:
- If you know that P, then you know that you are not a brain in a vat.
- You do not know that you are not a brain in a vat. So,
- You do not know that P.
Premise (1) is backed by the principle that knowledge is closed under known entailment:
(CL) For all S,α,β: If S knows that α, and S knows that α entails β, then S knows that β.
Since you know that P entails that you are not a brain in a vat (for example, let P = You are sitting at a computer), by (CL) you know that P only if you know its entailed consequence: you are not a brain in a vat. Premise (2) is backed by the consideration that your experiences do not allow you to discriminate the hypothesis that you are not a brain in a vat (but rather a normal human) from the hypothesis that you are a brain in a vat. Your experience would be the same regardless of which hypothesis were true. So you do not know that you are not a brain in a vat.
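The skeptical argument (1)-(3) is valid by a simple modus tollens on premise (1). As a minimal sketch, here is its skeleton rendered in Lean (a hypothetical formalization, not from the source): `K` is an opaque predicate read as “you know that”, and premises (1) and (2) are taken as hypotheses rather than argued for.

```lean
-- Hypothetical rendering: K p reads "you know that p",
-- BIV reads "you are a brain in a vat".
axiom K : Prop → Prop
axiom BIV : Prop

-- Premise (1) is backed by closure (CL); premise (2) by the
-- indistinguishability of the two hypotheses.
theorem skeptical_argument (P : Prop)
    (prem1 : K P → K (¬ BIV))  -- (1) if you know P, you know you're not a BIV
    (prem2 : ¬ K (¬ BIV)) :    -- (2) you don't know you're not a BIV
    ¬ K P :=                   -- (3) you don't know P
  fun hKP => prem2 (prem1 hKP)
```

The sketch shows only that the argument's form is valid; the philosophical work lies entirely in defending (1) and (2).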
2. Putnam's BIVs and the Disjunctive Argument
In a famous discussion, Hilary Putnam has us consider a special version of the brain-in-a-vat hypothesis. Imagine that you are a brain in a vat in a world in which the only objects are brains, a vat, and a laboratory containing supercomputers that stimulate the envatted brains. Imagine further that this situation has arisen completely randomly, and that the brains have always been envatted. No evil neuroscientists or renegade machines have brought about the brains' envatment. Call such a special brain in a vat a ‘BIV’. A skeptical argument just like that above can be formulated using the BIV hypothesis. Putting things now in the first person, Putnam argues that I can establish that I am not a BIV by appeal to semantic considerations alone—considerations concerning reference and truth. This will block the BIV version of the skeptical argument.
Here is how Putnam motivates his anti-skeptical semantic considerations. Suppose that there are no trees on Mars and that a Martian forms a mental image exactly resembling one of my tree-images as a result of perceiving a blob of paint that accidentally resembles a tree. Putnam's intuition is that the Martian's image is not a representation of a tree. This is due to the lack of any causal connection between the image and trees (even, we will suppose, any attenuated causal connection such as interaction with a visiting Earthling who has seen trees). If I were a BIV, then my mental image resembling a tree would no more be a representation of a tree than would the Martian's mental image. Neither of us would have the sort of causal contact with trees which is required for our images to refer to trees. The same reasoning applies to any tokens of the word ‘tree’ which might come to be uttered (or thought) by the Martian and by the BIV. (In speaking about BIVs, we will use ‘utter’ to mean, in effect, ‘seem to utter’, since a BIV cannot speak or write, but only seems to himself to be speaking or writing. Similar remarks apply to ‘speak’.)
What does the BIV's token of ‘tree’ refer to, if not to trees? Putnam offers three possibilities:
- (i) to ‘trees-in-the-image’ (I take it that by ‘the image’, Putnam means the succession of experiences had by the BIV),
- (ii) to the electrical impulses that stimulate the brain and thereby cause it to have experiences just like those a normal human has when it sees a tree, and
- (iii) to the computer program features that are causally responsible for the stimuli described in (ii) and thus the experiences described in (i).
On the natural, pre-Putnam assignment of references which one would make in evaluating the truth value of a BIV's utterance of ‘Here is a tree’, we would hold that the brain's token of ‘tree’ refers to trees and, hence, that his sentence token is false, since he is not near a tree. On each of Putnam's proposed reference assignments, though, the brain's sentence token comes out true (provided that the brain is indeed being stimulated so as to have experiences just like those a normal human has when seeing a tree and that the stimulation is caused by the appropriate electrical impulses generated by a computer's program features). On account (i), for example, the BIV's utterance of ‘Here is a tree’ is true iff the BIV is having experiences as of being near a tree.
Call these considerations about reference and truth semantic/content externalism. This view denies a crucial Cartesian assumption about mind and language, viz., that the BIV's sentences express systematically mistaken beliefs about his world, the very same beliefs had by a normal counterpart to the BIV, with matching experiences. On the contrary: the BIV's sentences differ in reference and truth conditions (and, accordingly, in meaning) from those of his normal counterpart. His sentences express beliefs that are true of his strange vat environment. The differences in the semantic features of the sentences used by the BIV and those used by his normal counterpart are induced by the differences in the beings' external, causal environments.
Account (iii) of the referents of the BIV's words gives the most plausible semantic/content externalist reference assignment, since recurring program features that systematically cause the BIV's ‘treeish’ experiences play a causal role vis a vis the BIV's uses of ‘tree’ that is analogous to the causal role played by trees vis a vis a normal human's uses of ‘tree’.
Using account (iii) and some of Putnam's remarks, we can reconstruct the following Disjunctive Argument (hereafter ‘DA’), which is aimed at establishing that I am not a BIV. If DA succeeds, then we have a response to a skeptical argument involving the BIV hypothesis which shares the form of the Cartesian argument (1)-(3) above: since a successful DA generates knowledge that I am not a BIV, it answers the skeptic's claim that, because I do not know that I am not a BIV, I do not know any target external-world proposition P.
Let ‘vat-English’ refer to the language of the BIV, let ‘brain*’ refer to the computer program feature that causes experiences in the BIV that are qualitatively indistinguishable from normal experiences that represent brains, and let ‘vat*’ refer to the computer program feature that causes experiences that are qualitatively indistinguishable from normal experiences that represent vats. A BIV, then, is not a brain* in a vat*: a BIV is not a certain computer program feature located in a certain other computer program feature. Here is DA:
- Either I am a BIV (speaking vat-English) or I am a non-BIV (speaking English).
- If I am a BIV (speaking vat-English), then my utterances of ‘I am a BIV’ are true iff I am a brain* in a vat*.
- If I am a BIV (speaking vat-English), then I am not a brain* in a vat*.
- If I am a BIV (speaking vat-English), then my utterances of ‘I am a BIV’ are false. [(b),(c)]
- If I am a non-BIV (speaking English), then my utterances of ‘I am a BIV’ are true iff I am a BIV.
- If I am a non-BIV (speaking English), then my utterances of ‘I am a BIV’ are false. [(e)]
- My utterances of ‘I am a BIV’ are false. [(a),(d),(f)]
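Since DA is purely propositional, its validity (though of course not the truth of its premises) can be checked mechanically. Here is a minimal Lean sketch, not from the source; the hypothesis names mirror DA's labels, and `T` and `BStar` are hypothetical stand-ins for “my utterances of ‘I am a BIV’ are true” and “I am a brain* in a vat*”.

```lean
-- Hypothetical propositional skeleton of DA; labels mirror (a)-(g).
theorem DA (BIV BStar T : Prop)
    (a : BIV ∨ ¬ BIV)          -- (a) I am a BIV or a non-BIV
    (b : BIV → (T ↔ BStar))    -- (b) vat-English truth conditions
    (c : BIV → ¬ BStar)        -- (c) a BIV is not a brain* in a vat*
    (e : ¬ BIV → (T ↔ BIV)) :  -- (e) English truth conditions
    ¬ T :=                     -- (g) my utterances of ‘I am a BIV’ are false
  fun hT =>
    Or.elim a
      (fun hB => c hB ((b hB).mp hT))   -- the (d) branch
      (fun hnB => hnB ((e hnB).mp hT))  -- the (f) branch
```

The proof makes vivid that (g) follows by cases on (a), exactly as intermediate steps (d) and (f) indicate.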
DA stops short of delivering the desired result, namely a proof of
(¬SK) I am not a BIV.
To establish (¬SK) we need to add a couple of further steps:
(h) My utterances of ‘I am not a BIV’ are true.
(T) My utterances of ‘I am not a BIV’ are true iff I am not a BIV.
(¬SK) follows from (h) and (T). Step (h) itself follows from (g) on natural assumptions about negation, truth, and quotation, but (T) is problematic in the current anti-skeptical context. The assumption of (T) seems to beg the question against the skeptic. Putnam's semantic externalist picture is this: if I am a non-BIV (speaking English) then (T) is the correct statement of the truth conditions of my sentence ‘I am a BIV’, using the device of disquotation; but if instead I am a BIV (speaking vat-English), then the correct statement of my sentence's truth conditions is the strange one given in (b) of DA, not using the device of disquotation. So in order to know that (T) is the correct statement of my sentence's truth conditions, I need to know that I am a non-BIV (speaking English). But that is what the anti-skeptical argument was supposed to prove (Brueckner 1986). According to this objection, Supplemented DA (DA plus (h) and (T)) is epistemically circular, in William Alston's sense: knowledge of one of its premises—(T)—requires knowledge of its conclusion (Alston 1989).
3. The Simple Arguments
Let us consider two other reconstructions of Putnam's thinking regarding BIVs. Here is Simple Argument 1 (‘SA1’—see Brueckner 2003):
- If I am a BIV, then my word ‘tree’ does not refer to trees.
- My word ‘tree’ refers to trees. So,
- I am not a BIV. [(A),(B)]
We will discuss (B) below. Premise (A) comes from Putnam's semantic externalism, as seen above. DA's claims about the BIV's sentences' truth conditions are grounded in claims about reference such as (A): since the BIV's words differ in their referents from the corresponding words of a normal speaker, the BIV's sentences accordingly differ in their truth conditions from the corresponding sentences of a normal speaker.
The semantic differences just mentioned induce differences at the level of thought content that are exploited in the following Simple Argument 2 (‘SA2’—Brueckner 2003, Ebbs 1992, Tymoczko 1989):
- If I am a BIV, then I am not thinking that trees are green.
- I am thinking that trees are green. So,
- I am not a BIV.
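SA1 and SA2 share a single modus tollens skeleton, displayed in the short Lean sketch below (a hypothetical rendering, not from the source): read `R` as “my word ‘tree’ refers to trees” for SA1, or as “I am thinking that trees are green” for SA2.

```lean
-- Hypothetical skeleton shared by SA1 and SA2 (modus tollens).
theorem simple_argument (BIV R : Prop)
    (premA : BIV → ¬ R)  -- (A) / (D)
    (premB : R) :        -- (B) / (E)
    ¬ BIV :=             -- (C) I am not a BIV
  fun hBIV => premA hBIV premB
```

As with DA, the formal validity is trivial; the objections discussed below all target the premises, in particular whether (B)/(E) can be known without begging the question.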
We will discuss (E) below. Regarding (D): since the BIV's word ‘tree’ does not refer to trees when he uses the sentence ‘Trees are green’ as a vehicle for thinking a thought, his thought does not have the content that trees are green. Rather, it has some content concerning tree*'s, that is, computer program features that cause in the BIV experiences that are qualitatively indistinguishable from normal experiences that represent trees. Perhaps the content is something like this: the program feature that causes ‘treeish’ experience is associated with a program feature that causes experiences that are qualitatively indistinguishable from normal experiences that represent objects as being green.
SA2 highlights the connection between semantic externalism and the mind. Not only do meaning, reference, and truth depend upon one's external environment in the ways we have discussed; further, the representational contents of one's thoughts, beliefs, desires and other propositional attitudes also depend upon circumstances external to one's mind.
The simple arguments are simpler than DA, and they also do not commit the anti-skeptic to a specification of the referents of the BIV's words and the contents of its thoughts. The arguments rest only upon the claim that the referents and contents in question differ from my referents and contents. Another advantage of the Simple Arguments is that they do not, on the face of it, seem to beg the question against the skeptic, as did DA when supplemented so that it validly implied the conclusion (¬SK): that I am not a BIV.
4. Objections and Responses
Let us now turn to an objection to SA1. Though the argument does not obviously require knowledge that I am a non-BIV (speaking English), as Supplemented DA seemed to, its premise (B) does seem upon reflection to be question-begging. On a natural understanding of (B), the truth of this premise requires the existence of trees as referents for my word ‘tree’. So to know that (B) is true, I would need to know that I am a non-BIV in a world containing trees, rather than a BIV in a treeless vat world. This problem infects SA2 as well, since my ground for holding that I can think tree-thoughts while the BIV cannot is ultimately the claim that the words we use to express our respective thoughts differ in reference (trees versus things that are not trees, such as tree*'s).
SA1 can be modified so as to avoid this objection (Brueckner 2003):
A*. If I am a BIV, then it is not the case that if my word ‘tree’ refers, then it refers to trees.
B*. If my word ‘tree’ refers, then it refers to trees. So,
C. I am not a BIV.
Premise (A*) comes from semantic/content externalism. Regarding premise (B*): knowledge that there are trees in my world is not required in order to justify this premise. But a problem still remains. In order to know (B*), don't I need to know that I am a non-BIV (speaking English), so that I can use the device of disquotation in stating the referents of my words (if they do have referents at all)?
A similar worry can be laid at the door of SA2. In order to know its second premise, (E), I need to know what I am now thinking. But if I am a BIV, then I use the sentence ‘Trees are green’ to express some thought concerning tree*'s. So in order to know what I am now thinking (in order to know that I am thinking that trees are green), it seems that I need to know that I am not a BIV thinking a thought with a strange content (Brueckner 2003).
A reasonable response to the foregoing objection to Modified SA1 is as follows. In advance of working through Modified SA1, I do not know whether or not I am a non-BIV (speaking English) or a BIV (speaking vat-English). But I do know certain things about my own language (whatever it is and wherever I am speaking it). By virtue of knowing the meaning of ‘refers’ and the meaning of quotation marks, I know that disquotation can be correctly applied to any successfully referring term of my language, in the way that (B*) indicates for my word ‘tree’. This is a priori knowledge of semantic features of my own language (whatever it is—English or vat-English). I know (A*) in virtue of my a priori, philosophical knowledge of the theory of semantic externalism and of how it applies to the case of the BIV. Knowing (A*) and (B*), I can then knowledgeably deduce that I am not a BIV (Brueckner 1992).
A similar response to the foregoing objection to SA2 is that I have knowledge of my own mind that is not experientially based. I can gain the knowledge that I am now thinking that trees are green via introspection. Putting this self-knowledge together with my a priori, philosophical knowledge of SA2's first premise, (D) (knowledge based upon my understanding of semantic externalism), I can then knowledgeably deduce that I am not a BIV. A problem for this response has been raised by various philosophers. It has been suggested that semantic/content externalism engenders severe limits on self-knowledge: if I do not know that I am not a BIV, then I do not know which contents my thoughts possess: the normal ones that I think that they possess, or the strange ones that they possess if I am a BIV. So the response we have considered may be in trouble if semantic externalism gives rise to such skepticism about knowledge of content (Ludlow and Martin 1998).
The foregoing defenses of the Simple Arguments emphasize a constraint on anti-skeptical arguments: their premises must be knowable a priori. The justification of their premises must not require any appeal to the deliverances of sense-experience. Now Modified SA1 is driven by the following thought: the referent of the BIV's ‘tree’ is something strange, viz., tree*'s (certain computer program features); but the referent of my ‘tree’ (if such there be) is trees; so I am not a BIV. This thought in turn rests upon the natural assumption that trees are not computer program features. But is that assumption something that I know a priori? In work unrelated to skepticism, Putnam has claimed that even though it is necessary that cats are animals (just as it is necessary that water is H2O), it is not knowable a priori that cats are animals (just as it is not knowable a priori that water is H2O). According to Putnam, the concept of cat allows that in advance of gaining knowledge of their inner structure, cats could turn out to be robots. The worry is that in a similar way, the concept of tree is such that in advance of gaining knowledge of the existence and nature of trees, trees could turn out to be computer program features. If I hold in abeyance my seeming a posteriori knowledge about trees, then, I cannot fairly say that in the vat world, there are no trees. Thus, I do not know a priori that the BIV's word ‘tree’ refers to things other than trees (in virtue of referring to computer program features which are distinct from trees) (Brueckner 2005).
This objection to Modified SA1 can be answered by focusing upon the dialectical situation between skeptic and anti-skeptic. The skeptic wishes to impugn my seeming knowledge of the external world by putting forward a skeptical hypothesis that is incompatible with the external-world propositions I believe. We are considering the skeptical hypothesis SK (= I am a BIV). On the current objection to our anti-skeptical argument, the skeptical critic undermines his own position by suggesting that SK is compatible with external-world propositions such as that I am in the presence of green trees. I can now argue as follows in response to the skeptic's current objection. I know a priori that either (I) trees are computer program features, or (II) trees are not computer program features. On the first alternative, the skeptic undermines his own overall position, and on the second alternative, the skeptic's objection is withdrawn. So we could view Modified SA1 as being an argument by cases: it is not known a priori which case obtains, but it is known a priori that the skeptic loses in each case.
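The argument by cases just described has a simple propositional shape, with the a priori disjunction of (I) and (II) supplied by excluded middle. A hypothetical Lean sketch (the proposition names are mine, not the source's):

```lean
-- Hypothetical sketch: the skeptic loses whichever disjunct holds.
open Classical in
theorem skeptic_loses (TreesArePrograms SkepticLoses : Prop)
    (case1 : TreesArePrograms → SkepticLoses)      -- alternative (I)
    (case2 : ¬ TreesArePrograms → SkepticLoses) :  -- alternative (II)
    SkepticLoses :=
  Or.elim (em TreesArePrograms) case1 case2
```

Note that what is known a priori here is only the disjunction and the two conditionals, not which disjunct obtains.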
Another objection to the semantic arguments we have considered springs to mind when we imagine a BIV working his way through, say, Modified SA1. When the BIV thinks thoughts via the sentences (A*), (B*), and (C), he is not, for example, thinking about trees when he thinks his second premise. The (used) occurrence of the word ‘trees’ in his premise does not refer to trees but rather to something else—tree*'s, that is, certain computer program features. Understood in this way, his second premise is true. His first premise concerns the referent of his word ‘tree’ on condition that he is a brain* in a vat*. Thus, the BIV's first premise is true in virtue of having a necessarily false antecedent (since it is not logically possible for him to be a computer program feature). So the BIV's version of Modified SA1 is sound. But he uses the argument to prove the conclusion that he is not a brain* in a vat*, rather than the conclusion that he is not a BIV.
The following worry arises. Perhaps I am a BIV who uses Modified SA1 to prove that I am not a brain* in a vat*, rather than the desired result that I am not a BIV. However, this worry is unfounded. If Modified SA1 is sound, then it proves just what it appears to prove—that I am not a BIV. Just read the argument carefully when you work through it! It makes no difference to my argumentative situation if someone on Alpha Centauri uses those very sentences with different meanings from mine and proves that muons move rapidly (Johnsen 2003, Brueckner 2004).
A final objection to the semantic arguments is hard to dispute. The problem is the narrow scope of the arguments. They cannot prove that I am not a recently disembodied brain in a vat (as opposed to a Putnamian BIV). If I have been speaking English up until my recent envatment, then my words will retain their English referents (to trees and so on) and my thoughts will retain their normal contents (about trees and so on). Thus, the Putnamian semantic externalist considerations will find no purchase against the skeptical hypothesis that I am a fledgling brain in a vat (Brueckner 1986). However, in such a “recent envatment” scenario, the pertinent skeptical argument leaves unscathed many of my knowledge-claims (such as that I was born in the USA, that I own a black cat, …). So the “recent envatment” scenario lacks the skeptical power of the Putnamian BIV scenario.
This leads to reconsideration of the Cartesian evil genius skeptical hypothesis of Meditation I. Recall that in the Cartesian scenario, all that exists is my mind just as it actually is and a God-like Evil Genius that directly causes my mental states. Nothing physical exists. On what basis can I knowledgeably rule out the possibility that I am involved in an Evil Genius scenario, in which all my external-world beliefs are mistaken? If I cannot rule out the possibility that I am in such a scenario, then, according to Descartes, I do not know any of the external-world propositions that I claim to know. The Putnamian semantic considerations over which we have obsessed can be brought to bear against the foregoing radical skeptical scenario. It seems that we should assign referents to the terms of the evil genius “victim” that are analogous to the “computer program feature” referents in the BIV story—referents that are states of the evil genius, those systematically causally responsible for, say, my “treeish” experience. So contra Descartes, if I were the “victim” of an Evil Genius, I would not have thoroughly mistaken beliefs about things apart from my mind. Instead, I would have as many correct beliefs about things apart from my mind (in this case beliefs about states of the Evil Genius) as does a normal thinker in a normal environment. An analogue to a “simple argument” could also be constructed against the traditional Cartesian hypothesis.
Finally, two other, more radical skeptical hypotheses that are left unscathed by semantic externalism are that (1) I am a brain in a vat whose experiences are randomly caused by a supercomputer, or (2) there is a whimsical evil demon inducing my experiences with no stable mental sources to serve as referents. In such scenarios, there are no systematic causal connections, for example, between the computer program features or the nature of the demon and my recurring ‘treeish’ experiences. The semantic externalist would say that, in such scenarios, my words fail to refer to things in my world, and no truth conditions can be properly assigned to my sentences. These sentences accordingly fail to express contentful thoughts. On these radical skeptical hypotheses, I am asked, then, to countenance the (alleged) possibility that I am not thinking contentful thoughts via meaningful sentences with reference and truth conditions. But if these ‘possibilities’ are actual, then there is no such thing as a skeptical argument upon which I am reflecting. Thus, these radical skeptical hypotheses may well in the end undermine themselves.
The brain-in-a-vat hypotheses are crucial for the formulation of skeptical arguments concerning the possibility of knowledge of the external world that are modeled on the Cartesian Evil Genius argument. We have seen that the BIV hypothesis may well be refutable, given semantic/content externalism and given the assumption that one has a priori knowledge of some key semantic properties of one's language (or, alternatively, a priori knowledge of the contents of one's mental states). Even if Putnamian arguments fail to rule out all versions of the brain-in-a-vat hypotheses, their success against the radical BIV hypothesis would be significant. Further, these arguments highlight a novel view of the relations between mind, language, and the external world.
- Alston, W., 1989, “Epistemic Circularity”, in his Epistemic Justification: Essays in the Theory of Knowledge, Ithaca: Cornell University Press.
- Brueckner, A., 1986, “Brains in a Vat”, Journal of Philosophy, 83(3): 148–167.
- –––, 1992, “Semantic Answers to Skepticism”, Pacific Philosophical Quarterly, 73(3): 200–219; reprinted in DeRose and Warfield (eds.) 1999, pp. 43–60.
- –––, 1994, “Ebbs on Skepticism, Objectivity and Brains in Vats”, Pacific Philosophical Quarterly, 75: 77–87.
- –––, 1999, “Transcendental Arguments from Content Externalism”, in R. Stern (ed.), Transcendental Arguments: Problems and Prospects, Oxford: Clarendon Press.
- –––, 2003, “Trees, Computer Program Features, and Skeptical Hypotheses”, in S. Luper (ed.), The Skeptics: Contemporary Essays, Burlington: Ashgate.
- –––, 2004, “Johnsen on Brains in Vats”, Philosophical Studies, 129(3): 435–440.
- –––, 2010, Essays on Skepticism, Oxford: Oxford University Press.
- –––, 2011, “Skepticism and Semantic Externalism”, in S. Bernecker and D. Pritchard (eds.), The Routledge Companion to Epistemology, New York: Routledge.
- Brueckner, A. and G. Ebbs, forthcoming, Self-Knowledge in Doubt, Cambridge: Cambridge University Press.
- Christensen, D., 1993, “Skeptical Problems, Semantical Solution”, Philosophy and Phenomenological Research, 53(2): 301–321.
- Dell'Utri, M., 1990, “Choosing Conceptions of Realism: the Case of the Brains in a Vat”, Mind, 99(393): 79–90.
- DeRose, Keith, and T. Warfield (eds.), 1999, Skepticism: a Contemporary Reader, Oxford: Oxford University Press.
- Ebbs, G., 1992, “Skepticism, Objectivity and Brains in Vats”, Pacific Philosophical Quarterly, 73(3): 239–266.
- –––, 1996, “Can We Take Our Own Words at Face Value?”, Philosophy and Phenomenological Research, 56: 499–530.
- –––, 2001, “Is Skepticism about Self-Knowledge Coherent?”, Philosophical Studies, 105(1): 43–58.
- Forbes, G., 1995, “Realism and Skepticism: Brains in a Vat Revisited”, Journal of Philosophy, 92 (4): 205–222; reprinted in DeRose and Warfield (eds.) 1999, pp. 61–75.
- Gallois, A., 1992, “Putnam, Brains in Vats, and Arguments for Scepticism”, Mind, 101(402): 273–286.
- Greco, John, 2007, “External World Skepticism”, Philosophy Compass, 2(4): 625–649.
- Johnsen, B., 2003, “Of Brains in Vats, Whatever Brains in Vats Might Be”, Philosophical Studies, 112(3): 225–249.
- Ludlow, P. and N. Martin (eds.), 1998, Externalism and Self-knowledge, Stanford: CSLI Publications.
- McIntyre, J., 1984, “Putnam's Brains”, Analysis, 44: 59–61.
- Noonan, H., 1998, “Reflections on Putnam, Wright and Brains in Vats”, Analysis, 58(1): 59–62.
- Nuccetelli, S. (ed.), 2003, New Essays on Semantic Externalism and Self-Knowledge, Cambridge, MA: MIT Press.
- Putnam, H., 1981, Reason, Truth, and History, Cambridge: Cambridge University Press, Chapter 1, pp. 1–21; reprinted as “Brains in a Vat”, in DeRose and Warfield (eds.) 1999, Chapter 2, pp. 27–42.
- Roth, M. and G. Ross (eds.), 1989, Doubting: Contemporary Perspectives on Skepticism, Dordrecht: Kluwer.
- Smith, P., 1984, “Could We Be Brains in a Vat?”, Canadian Journal of Philosophy, 14(1): 115–123.
- Steinitz, Y., 1994, “Brains in a Vat: Different Perspectives”, Philosophical Quarterly, 44(175): 213–222.
- Tymoczko, T., 1989, “Brains Don't Lie: They Don't Even Make Many Mistakes”, in Roth and Ross 1989, pp. 195–213.
- Warfield, T.A., 1998, “A Priori Knowledge of the World: Knowing the World by Knowing Our Minds”, Philosophical Studies, 92: 127–147; reprinted in DeRose and Warfield (eds.) 1999, pp. 76–92.
- Wright, C., 1992, “On Putnam's Proof That We Are Not Brains-in-a-Vat”, Proceedings of the Aristotelian Society, 92: 67–94.