Ram Neta, University of North Carolina, Chapel Hill
Section 1: The Basing Relation
So-Hyun sees a Chinese Crested dog, and she recalls that hairless dogs that look like that are typically Chinese Crested dogs. At the very same time, her friend Adede points at the dog and says “look at that Chinese Crested dog right there!” So-Hyun believes that the dog is a Chinese Crested.
So-Hyun has at least two independent reasons to believe that the dog is a Chinese Crested. One reason is that she heard Adede just say so. And another reason is that, as she recalls, hairless dogs that look like that are typically Chinese Crested dogs. But, even though she has two independent reasons to believe that the dog is a Chinese Crested, it’s at least possible, given my description of the case so far, that only one of those is a reason for which she believes it. This possibility shows that there is a difference between reasons that one has to believe and reasons for which one believes — a distinction in epistemology that is analogous to the distinction that some philosophers of action mark by distinguishing “normative” reasons that one has (reasons that one has to act) from “motivating” reasons (reasons for which one acts). But what does this difference consist in?
Donald Davidson tried to explain the difference between reasons that one has to act and reasons for which one acts as a difference consisting in the fact that the latter must be, but the former need not be, reasons that cause one’s action.[1] Although this view has been widely accepted, some have objected to the claim that our intentional actions are caused by the reasons for which we do them.[2] Rather than get involved in this controversy, let me try to locate Davidson’s insight in a way that does not take on controversial commitments about causation. Davidson’s insight, stated uncontroversially, is this: a reason for which you act is always a reason why you act. Or, as some philosophers would put the point, “motivating” reasons are always “explanatory” reasons.
This point holds true not just of action, but also of belief: a reason for which you believe is always a reason why you believe. If So-Hyun has two reasons to believe that the dog is a Chinese Crested, and she believes it for only one of those reasons, then the reason for which she believes it must also be a reason why she believes it.
Now let me add a stipulation to the case of So-Hyun and Adede: So-Hyun noticed the Chinese Crested dog only because Adede pointed at it and called it a “Chinese Crested” (a phrase that got So-Hyun’s attention, and jogged her recall), but So-Hyun doesn’t at all trust Adede’s judgment in these matters. Adede’s testimony, let me stress, is fully trustworthy, and So-Hyun’s evidence indicates as much, but So-Hyun doesn’t respond appropriately to her evidence on this issue, and simply doesn’t trust Adede’s testimony. So Adede’s testimony is a normative reason to believe that the dog is a Chinese Crested, and, since So-Hyun is aware of her testimony, and has evidence that indicates its trustworthiness, it is also a normative reason that So-Hyun, at least in some sense, has. In this case, the reason for which So-Hyun believes that the dog is a Chinese Crested is that, as she recalls, hairless dogs that look like that are typically Chinese Crested dogs. The reason for which she believes it is not that Adede pointed it out as such, for So-Hyun doesn’t trust what Adede says. But Adede’s pointing out the Chinese Crested as such is still a reason why So-Hyun believes that the dog is a Chinese Crested, since Adede’s behavior is what explains why So-Hyun notices the dog and recalls what Chinese Cresteds look like in the first place. It follows that, even if all reasons for which we believe are reasons why we believe, still, not all reasons why we believe are reasons for which we believe.
So even if you have a reason to believe that is also a reason why you believe, it doesn’t follow that it is a reason for which you believe. Reasons that are both reasons to and reasons why need not yet be reasons for which. And, while all reasons for which are reasons why, not all reasons for which are reasons to.
Although we’ve been focused on the case of So-Hyun’s belief, the points that we’ve made generalize, and they generalize beyond beliefs. There are the reasons for which someone raises her hand, the reasons for which she is angry, the reasons for which she intends to drink a toxin, the reasons for which she prefers eating at home to eating out, and the reasons for which she chooses the road less travelled. More generally, there are reasons for which an agent, I will say, is in a “rationally determinable condition” — whether that condition takes the form of a belief, a judgment, an emotion, an intention, a preference, a choice, or an action. The project of this paper is to gain a better understanding of reasons for which — or rather, of the relation that they bear to the rationally determinable conditions for which they are reasons.
Epistemologists sometimes use the phrase “the basing relation” to denote the distinctive kind of explanatory relation that there is between a reason and the belief for which it is a reason. I will generalize this usage of “the basing relation” to cover the relation between a reason and the intention, or action, or judgment, or emotion, or choice, or preference for which it is a reason: more generally, it is the relation between a reason and the rationally determinable condition for which it is a reason. Using the phrase in this general way, I can now state the goal of this paper:
In this paper, I will give an account of the basing relation.
In the next section, I will consider one seemingly plausible account, and argue that it is, at best, incomplete.
Section 2: Basing and the Varieties of Defeat
As we’ve told the story, So-Hyun has two reasons to believe that the dog in front of her is a Chinese Crested. One reason is that, as she recalls, dogs that look like that are typically Chinese Cresteds. The other is that Adede said that the dog is a Chinese Crested. But while So-Hyun has both of these reasons to believe it, only one of these is the reason for which So-Hyun believes it: specifically, the reason for which So-Hyun believes it is the first, but not the second, of the two reasons just enumerated. In virtue of what is one, but not the other, a reason for which So-Hyun believes it?
I’d now like to articulate one plausible proposal. Consider what happens if So-Hyun gets evidence that, contrary to her recollection, Chinese Cresteds do not typically look like the dog in front of her. This evidence will defeat the doxastic justification of So-Hyun’s belief that the dog in front of her is a Chinese Crested. One indication of this defeat is that it would typically be rational for So-Hyun to respond to such evidence by reducing her confidence, or perhaps even suspending her belief, that the dog in front of her is a Chinese Crested. But now consider what happens if So-Hyun gets evidence that Adede did not say that the dog was a Chinese Crested. This evidence will not defeat the doxastic justification of So-Hyun’s belief that the dog in front of her is a Chinese Crested. One indication of this lack of defeat is that it would not be rational for So-Hyun to respond to such evidence by reducing her confidence that the dog in front of her is a Chinese Crested. According to the present proposal, it is her recollection of the appearance of Chinese Cresteds, but not Adede’s testimony, that is the reason for which So-Hyun holds her belief, and this is because the doxastic justification of So-Hyun’s belief that the dog is a Chinese Crested can be defeated by defeating her justification for believing the former reason, but cannot be defeated by defeating her justification for believing the latter reason.
Just as beliefs can be doxastically justified, so too can intentions, actions, choices, preferences, and emotions be justified. More generally, rationally determinable conditions (henceforth, RDC’s) can be justified by virtue of being based in the right way on justifying reasons. I take doxastic justification therefore to be just one species of a broad genus, and I will use the phrase “RDC justification” to denote this genus. Just as doxastic justification can be defeated, RDC justification more generally can be defeated. And just as the defeat of doxastic justification is typically indicated by its being rational for the agent to suspend belief, so too the defeat of RDC justification is indicated by its being rational for the agent to suspend her RDC.
In general, then:
R is a reason for which A is in rationally determinable condition C = A’s being in C can have its RDC justification defeated by defeating A’s justification for accepting R.
Although this account of the basing relation appeals to normative terms (like justification and defeat) in order to explain basing, this does not strike me as a problem with the account: there’s no reason to think that we can or should try to explain basing in non-normative terms. Furthermore, this account seems to make at least many, if not all, of the right predictions concerning what stands in the basing relation to what. Perhaps there are cases in which we might wish to say that a creature believes, or intends, or acts, for particular reasons, but for which the account above makes the wrong predictions: but if there are such cases, I’m inclined to think that they show only that our ordinary use of the phrase “reasons for which” is poorly regimented.[3]
So there’s much to be said in favor of the account above. Are we done?
No. This account, even if extensionally correct, suffers from two shortcomings. The first is that the account explains things the wrong way round. The fact that A’s C’ing can have its RDC justification defeated by defeating A’s justification for accepting R seems to be explained by the fact that R is a reason for which A C’s, but the account says that the former is what explains the latter.
The second problem, which is related to the first, is that the account fails to explain a puzzling and important phenomenon concerning the defeat of RDC justification. In the remainder of this section, I will state this phenomenon, and then say why the account proposed above fails to explain it.
As we’ve told the story about So-Hyun, the reason for which she believes that the dog is a Chinese Crested is that, as she recalls, hairless dogs that look like this are typically Chinese Crested. Here’s a diagram:
Now consider the variety of ways in which So-Hyun could fit the description that I’ve given, and nonetheless be unjustified in believing that this dog is a Chinese Crested. This could happen if So-Hyun has an opposing, or overriding, defeater to which she is insufficiently sensitive: for instance, she could have, and ignore, independent evidence that the dog in front of her is not a Chinese Crested. Such a defeater directly attacks the rightmost element in the picture above. Or she could have an undercutting defeater to which she is insufficiently sensitive: for instance, she could have, and ignore, independent evidence that her recall is very poor when it comes to information about the appearance of dog breeds. Such a defeater directly attacks the leftmost element in the picture above. Or, finally, even if she is fully justified in believing that dogs that look like that typically are Chinese Cresteds, she could still have, and ignore, independent evidence that the particular dog in front of her is very atypical of its breed. The first kind of defeater can oppose her justification for believing that the dog is a Chinese Crested, even though it does nothing to defeat her justification for believing that hairless dogs that look like this are typically Chinese Crested. The second kind of defeater can undercut her justification for thinking that the dog is a Chinese Crested by virtue of defeating her justification for believing that hairless dogs that look like this are typically Chinese Crested. But the third kind of defeater has a different effect from either of the others: it defeats her justification for thinking that this dog is a Chinese Crested, but it does not oppose her justification for thinking this, nor does it defeat her justification for believing that dogs that look like this are typically Chinese Cresteds. How does this work? 
How can her justification be defeated without being opposed, and without defeating her acceptance of the reasons that supply that justification? This kind of defeater would need directly to attack the middle element in the picture above, rather than the rightmost or leftmost element. But how should we understand the middle element, so as to make sense of this possibility of defeat?[4]
It may seem that there is an obvious answer to our questions about how this third kind of justification defeat or augmenting works — namely, that we have so far identified So-Hyun’s reason for her belief much too narrowly, as consisting merely in the fact (or proposition, or state of apparent recollection) that dogs that look that way are typically Chinese Cresteds. But this latter — it might be thought — is just the tip of a whole iceberg of reasons that So-Hyun has for her belief. Once we expose the rest of the iceberg, it will become obvious that there’s nothing unusual about the kind of justification defeat or augmenting that we’ve considered: it is nothing other than the kind of justification defeat or augmenting that we get when we add a new piece of evidence to an agent’s total body of evidence, and thereby affect the agent’s degree of justification for some RDC that she makes on the basis of this total body of evidence.
This response is too quick. Suppose we specify the whole of So-Hyun’s reason to believe that the dog is a Chinese Crested — however extensive that whole body of reasons is. Indeed, let it be her total body of evidence, and let her belief be doxastically justified, by virtue of being based in the right way on this total body of evidence. Now suppose that So-Hyun gains one additional piece of evidence, and it leaves her justification for accepting the truth of every proposition in that whole body of evidence completely unaffected, but it does affect her justification for thinking that the whole body of evidence supports her belief that the dog in question is a Chinese Crested. Perhaps an eminent mind-reading statistician assures her that, much as it may seem to her as if the rest of her total evidence supports the proposition that the dog in question is a Chinese Crested, in fact it does not do so. This new piece of evidence need not affect the justification that So-Hyun has for accepting any of the rest of her evidence; nonetheless, it defeats the doxastic justification of her belief that the dog in question is a Chinese Crested.[5] So, no matter how extensive So-Hyun’s reasons to believe that the dog is a Chinese Crested, the doxastic justification of her belief can be defeated (or augmented) without affecting her justification for accepting those reasons.
It is typical for epistemologists to distinguish opposing defeat from undercutting defeat. But what this discussion has shown is that there are actually three kinds of defeat.
Opposing defeat: an agent’s justification for C’ing is defeated in virtue of reasons to not C, and independently of any effect on the justification of her reasons for C’ing.
Undercutting defeat: an agent’s justification for C’ing is defeated in virtue of defeating her justification for accepting her reasons for C’ing.
Side defeat: an agent’s justification for C’ing is defeated in virtue of reasons to doubt that her reasons for C’ing provide justification for C’ing, and independently of any effect on the justification of her acceptance of those reasons, and independently of any other reason to not C.
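The three definitions can be set side by side in schematic form (a rough summary in my own notation, not the author’s: $J(C)$ for the agent’s justification for C’ing, $J(R)$ for her justification for accepting her reason R, $D$ for the defeater):

```latex
\begin{align*}
\textbf{Opposing:}\quad & D \text{ defeats } J(C) \text{ by being a reason against } C;\ J(R) \text{ is untouched.}\\
\textbf{Undercutting:}\quad & D \text{ defeats } J(C) \text{ by defeating } J(R).\\
\textbf{Side:}\quad & D \text{ defeats } J(C) \text{ by being a reason to doubt that } R \text{ justifies } C,\\
& \text{while } J(R) \text{ is untouched and } D \text{ is no reason against } C.
\end{align*}
```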
Pictorially:
The question I want to raise now is: why is there such a thing as side defeat? In other words, why does reducing an agent’s justification for thinking that her reasons provide justification for her belief work to defeat the justification for her belief?
Notice, by the way, that the standard distinction between higher-order defeaters and first-order defeaters cuts across this three-fold distinction between opposing, undercutting, and side defeat. There could be first-order opposing defeat, higher-order opposing defeat, first-order undercutting defeat, higher-order undercutting defeat, and so on.
The account of the basing relation proposed at the beginning of this section — the account that explains basing in terms of undercutting defeat — does nothing to help us understand how side defeat works. But, just as basing is related to undercutting defeat, so too it is related to side defeat: if R is a reason for which A C’s, then the justification of A’s C’ing can be defeated not simply by defeating R, but also by gaining reasons to doubt the connection between R and C’s RDC justification. If we’re going to understand basing in terms of its relation to defeat, then we need to understand why it is related in the way that it is not merely to undercutting defeat, but also to side defeat.
Section 3: A Dilemma About Side Defeat
We’ve just identified a distinctive kind of justification defeat — the defeat of one’s justification that does not involve opposing defeat, but also does not involve any defeat of one’s justification for the propositions that constitute one’s reasons. Whenever we believe something for a reason, our justification for this belief can be defeated or augmented in this distinctive kind of way, viz., by defeating or augmenting one’s justification for a proposition that is neither one’s belief, nor one’s reason to believe, but that is related (in some yet to be specified way) to one’s reason to believe. What we want to know is: how does this phenomenon of “side defeat” work? What is the basing relation such that it admits of such defeat?
In this section, I will articulate a dilemma that we must confront in answering this question.[6]
Our question now is: how can an agent’s C’ing (based on R) have its RDC justification defeated, but not by gaining reasons to doubt R, nor by gaining reasons against C’ing?
We cannot plausibly avoid this question by claiming that side defeat is a brute normative phenomenon, insusceptible of deeper explanation. There may be some brute normative phenomena, but side defeat is not among them.
One way to answer our question about how side defeat works is by claiming that, for an agent to C, based on reason R, involves the agent’s believing that R supports, or is a good normative reason to, C. Side defeat is possible because, when this latter belief is defeated, so is the justification of her C’ing. Let’s call this the “representationalist” explanation of side defeat. Part of what makes this representationalist explanation plausible is that, when an agent C’s for the reason R, the agent is, in some sense, committed to R’s supporting C; and defeating this commitment will at least typically defeat the agent’s justification for C’ing.
Another way to answer this question is by claiming that, for an agent to C, based on a reason R, involves the agent’s exercising a disposition to C when she accepts R. Side defeat is possible because, when she has reason to distrust this disposition, she has reason to doubt her C’ing. Let’s call this the “dispositionalist” explanation of side defeat. Part of what makes this dispositionalist explanation plausible is that, when an agent C’s for the reason R, the agent is exercising a disposition to C when R[7]; and acquiring reasons to distrust this disposition will at least typically defeat the agent’s justification for C’ing.
In this section, I will argue that neither of these two explanations – neither the representationalist nor the dispositionalist explanation – can work.
Suppose that, with the representationalist, we try to explain side defeat by claiming that basing C on R involves believing that R supports, or is a normative reason to, C, and side defeat involves defeat of this belief. On that view, whenever one C’s for the reason R, one also believes that R is a reason to C, and one’s justification for C can be defeated by defeating one’s justification for believing that R is a reason to C. The problem with this view, as I will now argue, is that the explanation that it gives for side defeat specifies a condition that is not necessary for side defeat.
Suppose that we gain extremely compelling empirical evidence for the following psychological hypothesis: someone who has an accurate belief about the support relations between a particular reason R and a particular rationally determinable condition C is thereby very likely to suffer from a very localized cognitive disorder in which their being in C is not responsive to, or controlled by, their having reason R. Indeed, the more accurate a person’s beliefs about the support relation between R and C, the more likely the person is to suffer from this localized cognitive disorder concerning their being in C. People who do not at all suffer from this disorder – and so whose rationally determinable conditions are fully responsive to, and controlled by, their normative reasons for those conditions – tend to have almost no accurate beliefs about support relations (either because they don’t have beliefs about such support relations at all, or because the beliefs that they have are inaccurate).
If we gain enough evidence to become justified in believing this psychological hypothesis, then we will find ourselves in the following bizarre but nonetheless metaphysically possible type of scenario: we can recognize what normative reasons we have for our RDC’s, and we can recognize the support relations that those reasons lend to those RDC’s, but, by virtue of our justification for believing the psychological hypothesis above, we will also be justified in believing that the very RDC’s that we take to be supported by the reasons that we have are also not responsive to our having those reasons. And if we are justified in believing that an RDC is not responsive to our reasons for it, then, even if we have justification for being in that RDC, that is not a justification that the RDC itself, once we’re in it, can enjoy. (Smithies forthcoming provides cases of propositional justification that cannot be leveraged into doxastic justification: e.g., evidential justification for believing both p and that I do not believe that p.) Our RDC’s themselves will be side defeated.[8] And so we cannot successfully explain side defeat by saying that it is defeat of the belief that R supports my C’ing, since defeating such a belief is not necessary for side defeat. Side defeat could instead result from my justifiably believing that my RDC is not responsive to whatever reasons I have for it.
Next, let’s suppose that, with the dispositionalist, we try to explain side defeat by claiming that basing C on R involves not a belief but rather a disposition to C when one accepts R. In that case, whenever one C’s for the reason R, one also has the disposition to C when R, and one’s justification for C can be defeated by giving one reason to distrust this disposition, i.e., to think that this disposition leads one to C erroneously or incorrectly. The problem with this view, as I will now argue, is that the explanation that it gives for side defeat specifies a condition that is not sufficient for side defeat.
Suppose that, although we have the disposition to C when R, and we do not have the disposition to C when R* (≠R), we also have justified, false beliefs about what dispositions we have, and we lack some true beliefs about what dispositions we have. In particular, we falsely, though justifiably, believe that we do not have the disposition to C when R; we also falsely, though again justifiably, believe that we do have the disposition to C when R*; and finally, we do not believe (what is in fact the case) that we have the disposition to C when R, and we do not believe (what is in fact the case) that we do not have the disposition to C when R*. The dispositionalist cannot plausibly deny that this is possible: people generally have plenty of justified, false beliefs about what dispositions they have, so why should they not also have plenty of justified, false beliefs about what RDC-forming dispositions they have?
Now, if we have all of these justified, false beliefs, and lack all of these true beliefs, then we could find ourselves in the following bizarre but nonetheless metaphysically possible type of scenario: we have R, we are disposed to C when R, we exercise that disposition and consequently C, but we do not believe that we are exercising that disposition, and we also falsely but justifiably believe that we C not on account of R but rather on account of R*. Furthermore, let’s suppose, we now acquire evidence for the hypothesis that R does not support C. In such a case, since we are fully justified in taking there to be no connection whatsoever between the evidence that R does not support C, on the one hand, and our C’ing, on the other hand, our acquisition of evidence for the hypothesis that R does not support C gives us no reason whatsoever not to C, and so our C’ing is not defeated. A fortiori, C’ing is not side defeated. And so we cannot successfully explain side defeat by saying that it is constituted by our getting reasons to distrust the reliability of our disposition to C when R, since our getting such reasons is not sufficient for side defeat.
We are trying to explain side defeat by specifying what it is about the basing relation that makes it liable to suffer from side defeat. What we have shown so far is that this feature of the basing relation is neither the belief that the reason supports C, nor is it a disposition to C when one accepts the reason: defeat of the former is not necessary for side defeat, and having a reason to distrust the latter is not sufficient for side defeat. But what, then, is the feature of the basing relation that explains side defeat?
Could it be the conjunction of the two features considered: both the belief that R supports C, and the disposition to C when accepting R? No. This proposal doesn’t fix either of the problems affecting the preceding proposals. If an agent has undefeated justification for accepting R, and undefeated justification for accepting that R supports C, and she also has the disposition to C when accepting R, she could still suffer side defeat by having compelling evidence for the psychological hypothesis (which would, in this case, be false, given that she has the disposition to C when R) that her C’ing is unresponsive to her reasons for C’ing. So this conjunctive proposal doesn’t work.
Recall that at least part of what makes the representationalist explanation of side defeat plausible is that, when an agent C’s for the reason R, the agent is, in some sense, committed to R’s supporting C; and defeating this commitment will at least typically defeat the agent’s justification for C’ing. But the problem with the representationalist account of side defeat is that it locates side defeat as defeat of a belief concerning a particular relation of normative support between R and C, but side defeat can be constituted not by defeating that belief, but rather by calling into question the explanation for C’ing. This is a problem that can arise so long as the representationalist takes the belief that is defeated in side defeat to be something that does not itself fix the reasons why one C’s.
Recall that at least part of what makes this dispositionalist explanation of side defeat plausible is that, when an agent C’s for the reason R, the agent is exercising a disposition to C when R[9]; and acquiring reasons to distrust this disposition will at least typically defeat the agent’s justification for C’ing. But the problem with the dispositionalist account of side defeat is that it locates side defeat as involving reasons for calling into question the reliability of one’s disposition to C when R, but side defeat can fail to be constituted by such reasons, since one can have justified, false beliefs about what dispositions one has. This is a problem that can arise so long as the dispositionalist takes the dispositions to C when R to be dispositions about which one can have fully justified, false beliefs.
In order to avoid the objection that we’ve posed, the representationalist must take the belief that is defeated in side defeat to be a belief that fixes the explanation of one’s C’ing. In order to avoid the objection that we’ve posed, the dispositionalist must take the dispositions that explain one’s C’ing to be dispositions about which the agent cannot have fully justified but false beliefs. In the next two sections, I will show how we can perform both of these two fixes at once, and finally arrive at a view of the basing relation that enjoys the plausibility of both representationalism and dispositionalism, while suffering from neither of the objections posed above.
Section 4: Basing as the Use of a Demonstrative Concept
As I’ve argued in the preceding section, while basing might involve the belief that R supports C, and it might involve the disposition to C when one accepts R, neither of these conditions on basing can explain side defeat.
So what is it about the basing relation that explains side defeat? On the account that I develop in this section and the next, the basing relation involves the use of a demonstrative concept to refer to something in one’s own psychology. In order to spell out this account, I must say specifically what it is in one’s psychology to which one is thus referring, and I must also say more about the distinctive kind of demonstrative concept by means of which one refers to it. But before turning to either of these two main tasks of this section, first a few background observations.
Whether or not there are non-conceptual demonstratives, one feature of every demonstrative concept is that it involves some general, non-demonstrative concept. To conceptually ostend an object is always to ostend it as of some general kind or other. You can ostend that color, that shape, that occurrence, that thing, that sound, and so on, but all of these demonstrative concepts involve some non-demonstrative concept (i.e., color, shape, occurrence, thing, sound, etc.). Note that this does not imply that demonstrative reference requires that the referent actually fall into the extension of the general concept that partly constitutes one’s demonstrative concept, nor does it imply that one is justified in believing that it falls into that extension. Demonstrative reference can, for all I say here, presuppose lots of false and unjustified belief about the object to which one refers.
When an agent uses a demonstrative concept, that agent can be justified or not, and correct or not, in applying the concept that is partly constitutive of that demonstrative concept to the thing ostended by that demonstrative concept. If the correctness of the application manifests the concept-applier’s skill in applying it, then the application is not merely correct and justified, but also knowledgeable.
The basing relation, on the view that I propose here, is the use of a demonstrative concept to refer to a particular thing in one’s own psychology. Given what I’ve said just now, this form of deixis will have to employ a general concept, and it will be more or less accurate, more or less justified, and more or less knowledgeable, depending upon whether the thinker is correct, justified, or knowledgeable, in applying that general concept to the particular referent of the deixis. But what general concept will this deixis employ? And to what will it refer? Answering those questions will require just a bit more background.
We’ve so far been using the term “RDC” to refer to anything that can be based upon a reason, and we’ve used the phrase “RDC justification” to refer to the kind of justification that such rationally determinable conditions can enjoy by virtue of being based in the right way on adequate reasons. When a particular RDC (i.e., belief, intention, emotion, etc.) is RDC justified, there is something that makes it so. This is, typically at least, its being based upon the reason upon which it is based. This is what we’ll call the “RDC justifier” of the RDC. The RDC justifier of a particular condition will include not merely the normative reason that one has for being in the condition, but also everything that makes the condition be based in the right way on that normative reason.
It has required some work to isolate the concept of an RDC justifier. But, while it has required work to isolate this concept, and there is no easy way of expressing this concept in ordinary English, this does not imply that the concept itself is not an ordinary one. In fact, the concept is possessed by anyone who is capable of asking a particular kind of ordinary “why?” question, or understanding a particular kind of ordinary “because” statement: those questions and statements that concern reasons for which. (Children who are old enough to ask “why did the chicken cross the road?” have the relevant concept, though I don’t know whether pre-linguistic infants have it.) Someone can understand such questions well enough to know when they arise and when they do not, and to be able to assess potential answers to them as more or less relevant, even if she does not have any term in her vocabulary corresponding to our technical term “RDC justifier”.
With these remarks in the background, I can now start to spell out my account of the basing relation. For an agent A to C for the reason that R involves A’s ostending an explanatory relation between R, on the one hand, and her own C’ing, on the other, and ostending it under the concept RDC justifying. While performing this act of ostension might require an agent to have various beliefs and dispositions, the act of ostension itself is neither a belief nor a disposition.
Notice that it is possible for you to treat something as a reason to C even when you cannot say what your reason to C is, and even when you believe that you have no good reason to C: especially irrational agents do this sort of thing often, and most of us do it sometimes. There might be reasons for which I am angry at my neighbor, but I might think that, whatever those reasons are, they are almost certainly not good reasons. Still, if they are reasons for which I feel that way, and not merely reasons why I feel that way, then there must be some part of me that is treating those reasons, whatever they are, as RDC justifying my anger.
So basing involves using a demonstrative concept that contains the general concept RDC justifier to ostend an explanatory relation between one’s C’ing and one’s reason R. Just as it is possible to ostend the visible distance between two objects even when one is ignorant or mistaken about what those two objects are, so too is it possible to ostend the explanatory relation between R and one’s C’ing, even when one is ignorant or mistaken about what R and C are. The basing relation can therefore obtain even between relata that are unknown to, or misidentified by, the agent.
I’ve so far given one necessary condition on the basing relation: it involves using a demonstrative concept containing the general concept RDC justifier to ostend an explanatory relation between one’s C’ing and one’s reason R. But does basing involve any further conditions on the explanatory relation that is thereby ostended? What do we need to add to the claim above in order to get a full account of basing? The next section answers that question.
Section 5: Completing my Account of Basing
The basing relation obtains between some reason R, and some RDC C, whenever R is the reason for which an agent C’s. Now it is time to say precisely what this relation amounts to. Recall that one thing we wanted from an account of basing is that it explain side defeat, and we introduced the use of a certain kind of demonstrative concept as the component of basing that gets defeated in cases of side defeat. Another condition of adequacy on our account is that basing involve an explanatory relation: for R to be the reason for which A C’s, it must at the very least be a reason why A C’s. So how shall we build an account of basing that satisfies both of these conditions? I will consider three proposals.
Proposal 1: Basing is simply the conjunction of our two conditions, viz., R is the reason for which A C’s = R is a reason why A C’s, and A uses the concept RDC justifier to ostend an explanatory relation between R and her C’ing.
This first proposal is subject to clear counterexample in cases in which R is a deviant reason why A C’s, but A nonetheless incorrectly ostends a distinct explanatory relation between R and C under the concept RDC justifier. Consider, for instance, Davidson’s example of the climber who wants to let go of the rope and let his companion fall; this desire makes the climber so nervous that he trembles, and this trembling causes him to let go of the rope. The climber might mistakenly ostend some explanatory relation between his desire to let go of the rope and his letting go of the rope under the concept RDC justifier, but this would not suffice to make it the case that the reason for which he let go of the rope was that he wanted to do so.
The problem with this proposed account of basing — what seems to render it subject to counterexample — is that the explanatory condition and the ostending condition are treated as independent.[10] An adequate account of basing should connect these conditions to each other more closely. This suggests a second possible account of basing.
Proposal 2: Basing is the obtaining of the treating condition, caused by the obtaining of the explanation condition, viz., R is the reason for which A C’s = R is a reason why A C’s, and A uses the concept RDC justifier to ostend that very explanatory relation between R and her C’ing.
This second proposal has the virtue of satisfying our two constraints on an account of basing. And it also has the virtue of relating the two conditions so as to avoid the sorts of counterexample just described. But it has the vice of being subject to still other counterexamples. In particular, it fails to handle cases in which R is a deviant reason why A C’s, but A treats R as RDC justifying her C’ing because of R’s being a reason why A C’s. We can construct an example of this kind by modifying Davidson’s case of the climber slightly. Suppose that the climber’s desire to let go of the rope not only causes him to be so nervous as to let go of the rope, but furthermore, in causing him to do this, it also causes him to use the concept RDC justifier to ostend that very same explanatory relation between his desire and his letting go of the rope. This still would not make his desire the reason for which he lets go of the rope.
So, though the two conditions on basing need to be connected in order to secure a proper account of basing, the connection needs to be of the right kind. Let’s try once more.
Proposal 3: Basing is the obtaining of the explanation condition in virtue of the obtaining of the treating condition, viz., R is the reason for which A C’s = A uses the concept RDC justifier to ostend an explanatory relation between R and her C’ing, and in virtue of that fact, R is a reason why A C’s.[11]
Here, at last, we’ve reached an account that satisfies our two conditions on an account of basing, and can also handle all the cases correctly. The basing relation is an explanatory relation (a “reason why” relation) that obtains in virtue of our demonstratively referring to that very relation under the concept RDC justifier. Side defeat happens when, and because, the agent’s justification for applying the concept of RDC justification to the ostended explanatory relation is defeated. Recall that defeating the agent’s reason to believe that R supports C was not necessary for side defeat (since side defeat could occur even while the agent was still justified in believing that R supports C, if she was also justified in believing that her C’ing is not RDC justified), whereas defeating the agent’s reason for trusting her disposition to C when R was not sufficient for side defeat (since side defeat could fail to occur, so long as the agent was justified in trusting what she justifiably took to be the causally relevant disposition). The present account of side defeat avoids the problems of each of these other proposals. The ostended explanatory relation is identical to the real explanatory relation, since the latter is real only by virtue of being ostended. And the application of the concept of RDC justification to that explanatory relation is defeated by any justification for suspending the resulting RDC.
This may strike some philosophers as metaphysically odd: how can an explanatory relation obtain in virtue of our referring to it in thought? It can help to mitigate the sense of oddity to think of other cases in which an explanatory relation obtains in virtue of our referring to it. I say “I hereby pronounce you husband and wife”, and in virtue of saying these words, I bring into being the very same relation that the words describe. I think “I hereby think a self-referential thought”, and in virtue of thinking this, I bring into being the very same thought to which my thought refers. These cases are examples of “conjuring”, in the literal sense of that term. And so I say that, on my account, the basing relation is an act of conjuring.
I conclude that proposal 3 is correct. Basing consists in an explanatory relation between R, on the one hand, and A’s C’ing, on the other, that obtains by virtue of A’s ostending that explanatory relation under the concept of RDC justification.
You can ostend a thing only if that thing exists. And so you can ostend an explanatory relation only if that explanatory relation exists. Thus, an agent’s ostending an explanatory relation under the concept RDC justifying is possible only if that explanatory relation obtains. But the explanatory relation is grounded in the ostensive act. It follows that the ostensive act that we’ve described is both necessary and sufficient for the obtaining of the ostended explanatory relation. Since the explanatory relation is the basing relation, it follows that the basing relation obtains when and only when an agent ostends that relation under the concept RDC justifying.
How does basing, so understood, explain side defeat? Side defeat is the defeat that occurs when the agent’s act of conceiving of this explanatory relation – the explanatory relation between R and C – as one of RDC justification is itself defeated. Side defeat directly attacks an agent’s justification for subsuming this explanatory relation under the concept RDC justification.
Consider again the representationalist’s proposal that side defeat is constituted by having reason to doubt one’s belief that R supports C. This proposal failed because one could have no such reason, and still suffer from side defeat by virtue of having evidence that one’s C’ing was not responsive to R. The only way to fix this problem with representationalism is to say that one’s belief about the relevance of R to C fixes the reason why one C’s. And this is what we’ve done here: the explanatory relation between R and one’s C’ing is itself grounded in one’s ostending this explanatory relation under the demonstrative concept of RDC justifier, and to perform this ostensive act is to be committed to a relation of normative support between R and C.
Consider again the dispositionalist’s proposal that side defeat is constituted by having reason to doubt the reliability of one’s disposition to C when R. This proposal failed because one could have such reason, and yet be fully justified in taking it to be irrelevant to the justification of one’s C’ing. The only way to fix this problem with dispositionalism is to say that one’s disposition to C when R is not a disposition about which one could have fully justified but false beliefs. And this is what we’ve done here: the explanatory relation between R and C is the exercise of a disposition that occurs in virtue of one’s ostension of that very exercise as justificatory, and so in virtue of a fact to which one has privileged access. One might, of course, have false beliefs about various matters of fact to which one has privileged access – but these false beliefs cannot be fully justified, so long as one has privileged access to the facts that belie these beliefs.
There is some similarity between the present account of the basing relation and the account proposed in Schroeder 2007: “For R to be the (motivating) reason for which X did A is for the fact that R was a subjective normative reason for X to do A to constitute an explanatory reason why X did A.” But the present account enjoys one noteworthy advantage over Schroeder’s: the latter does not solve the problem of the deviant causal chain, since it leaves open the possibility that R’s being a subjective normative reason for X to do A is connected by a deviant causal chain to X’s doing A. In contrast, my account does solve the problem of the deviant causal chain, since it restricts the kind of explanatory relation between R, on the one hand, and X’s doing A, on the other hand, to the kind of explanatory relation that obtains in virtue of (or, is metaphysically grounded in) our ostending it.[12]
Works Cited
Anscombe, G.E.M. 1957. Intention. Basil Blackwell: Oxford.
Boghossian, Paul. 2012. “What is Inference?” Philosophical Studies 169: 1 – 18.
Davidson, Donald. 1963. “Actions, Reasons, and Causes.” Journal of Philosophy 60: 685 – 700.
Hyman, John. 2015. Knowledge, Action, and Will. Oxford University Press: Oxford.
Lasonen-Aarnio, Maria. 2014. “Higher-Order Evidence and the Limits of Defeat.” Philosophy and Phenomenological Research 88: 314 – 45.
Lavin, Douglas. 2011. “Problems of Representationalism: Raz on Reason and its Objects.” Jurisprudence 2: 367 – 78.
Lord, Errol and Sylvan, Kurt. ms. “Prime Time (for the Basing Relation).”
Neta, Ram. 2013. “What is an Inference?” Philosophical Issues: A Supplement to Nous 23: 388 – 407.
Schroeder, Mark. 2007. Slaves of the Passions. Oxford University Press: Oxford.
Smithies, Declan. Forthcoming. “Ideal Rationality and Logical Omniscience.” Synthese.
Sosa, Ernest. 2015. Judgment and Agency. Oxford University Press: Oxford.
Titelbaum, Michael. 2014. “Rationality’s Fixed Point.” Oxford Studies in Epistemology 5: 253 – 94.
Valaris, Markos. 2014. “Reasoning and Regress.” Mind 123: 101 – 27.
Notes
[1] Davidson 1963.
[2] Anscombe 1957.
[3] Are there reasons for which the fly moves towards the light, or are there only reasons why it does so? I’m tempted to say the latter, but if someone wishes to say the former, I have no quarrel with them; the basing relation that I’m interested in understanding is a relation that bears on the justification of a RDC.
[4] Notice that, corresponding to these three different ways of defeating her justification for believing that the dog is a Chinese Crested, there are also three different ways of augmenting her justification for believing that the dog is a Chinese Crested. She could have independent evidence for believing that the dog is a Chinese Crested. Or she could have independent evidence, apart from the evidence provided by her recollection, that dogs that look like that are typically Chinese Cresteds. Or, finally, and most relevantly for our purposes, she could have evidence that the particular dog in front of her looks very typical of its breed. This last sort of evidence would increase her justification for believing that the dog is a Chinese Crested, though it would not provide any independent additional reason to believe that the dog is a Chinese Crested, nor would it provide additional reason to accept her reason for this belief, viz., that dogs that look like that are typically Chinese Cresteds.
[5] Some philosophers (Titelbaum 2014, Lasonen-Aarnio 2014) will deny that this higher-order evidence can defeat So-Hyun’s justification for believing that the dog is a Chinese Crested. I leave it open that they are right about So-Hyun’s propositional justification for believing this. But such a view can’t be true of the doxastic justification of So-Hyun’s belief: the belief itself plainly becomes less justified when the believer has heard the testimony of the eminent mind-reading statistician.
[6] The dilemma that I present here is analogous to the one that Boghossian 2012 provides against different ways of understanding the “taking” condition on inference, but I focus on conditions of defeat, whereas Boghossian focuses on what constitutes inferring a conclusion from a premise. To the best of my knowledge, the dilemma was first set out in a fully general way in Lavin 2011.
[7] Hyman 2015 and Sosa 2015 both argue that the problem of the deviant causal chain can be solved by claiming that the causal relation involved in the “reason for which” relation is the kind of causation involved in the manifestation of a disposition.
[8] According to Valaris (2014), this scenario is not so much as possible. But I believe I have just made the case for its possibility.
[9] See note [7] above.
[10] See also Lord and Sylvan, ms.
[11] This is the account that I rely on (without defending it) in Neta 2013.
[12] Thanks to David Barnett, Matthew Boyle, Doug Lavin, Matthew Kotzen, Kate Nolfi, Josh Schechter, and Alex Worsnip for helpful comments. Thanks also to the audience at the Rutgers Epistemology Conference 2015 for stimulating discussion.
Hi Ram, thanks for this paper. I want to ask you some Questions of the Sort I Hate.
They are: Isn’t your proposal too intellectualist? I.e., doesn’t it deny that animals, infants, etc. ever believe and act in ways that are based (in your sense) on reasons? More generally, doesn’t it make basing depend too much on something very cognitively sophisticated? And what, for that matter, is it to “ostend an explanatory relation”? Is this some kind of (“occurrent”) inner mental act? If so, what evidence is there that we do such things? And if not, what is it instead?
I will now cease asking these questions. They make me feel dirty.
Hey John,
Thanks for the dirty talk!
Recall that what I’m interested in explaining is how it’s possible for basing relations (whatever they are) to be defeated. In other words, how is it possible for a belief (or other rationally determinable condition) to be based on a conclusive justification, and yet fail to be justified?
So the category of “basing” that I’m interested in understanding is a kind of explanatory relation that can do a better or worse job at transmitting justifiedness from a basis to a resulting condition (belief, intention, action, or what have you).
Babies and beasts know lots of things, and they are ignorant of lots of things. But their representational states are not assessable in terms of rationality, or justifiedness. They are neither justified nor unjustified in believing what they do, no matter why they believe it. So the explanatory relation that obtains between their beliefs (et al.) and the causes of their beliefs is not the same kind of explanatory relation that I mean to be analyzing.
I hope that’s helpful!
Best wishes,
Ram
Hi Ram, thanks for this reply. It’s good enough for me! But here is one more question:
Are you open to the possibility that other animals’ representational states, though not assessable for justifiedness, etc., and so not related via basing in the sense you analyze here, could still be related via some weaker form of basing — say, perhaps, the one that Hyman and Sosa analyze in their books?
Hi Ram,
Thanks for the paper!
As you might expect, I am very sympathetic to this: “The only way to fix this problem with [the representationalist account of side defeat] is to say that one’s belief about the relevance of R to C fixes the reason why one C’s.”
But I wonder about the sort of case you use to raise trouble for the original version of representationalism. Your case involves getting evidence for the psychological hypothesis that one might: (i) believe R; (ii) believe that R supports C; and (iii) be such that one’s C-ing is not at all sensitive to R.
I am not sure we can make sense of this description, though. Suppose I believe R and also that A follows from R. I have a hard time seeing how I could fail to count as believing A on just these grounds (I focus on belief, but the basic point should generalize). Just by the description of the case, I must clearly take A to be true (at least if “follows from” signifies conclusive support). Similarly, not only am I committed to acting as if A were true, this is a commitment that I can be held to, by my own lights — after all it is not as if I can plead ignorance either of R or of the connection between R and A. So what more would I need to do to count as believing A on the grounds that R? [1]
(Of course this is not to rule out that I may have all sorts of attitudes that are incoherent when combined with believing A; but those attitudes would also be incoherent, and in just the same way, with believing R and that A follows from R anyway.)
My worry is that if the “psychological hypothesis” is so much as coherent, then we are in danger of hollowing out our concept of belief. If believing that P were analogous to writing down a sentence in a special notebook, then the possibility I deny would be unproblematic. But it seems to me that on pretty much any other account of belief that is not so.
[1]There is one clear way out. Perhaps my epistemic state is not properly integrated, so that there are some contexts in which I can correctly be described as believing R, and some other contexts in which I can correctly be described as believing that A follows from R, but no contexts in which both descriptions are correct. But this does not seem to be what the “psychological hypothesis” is about; moreover, in such a case the original belief ascriptions would come into question too.
… Actually, after writing the above comment it occurred to me that your account of deixis as the core of the basing relation might be profitably construed as a response to the problem of cognitive integration I mention in the footnote in my earlier comment (and which Matt Boyle and Zoe Jenkin raise in their comments to my paper below).
But I wonder about the ontological commitments of demonstrative reference to beliefs and relations among beliefs. Does such reference require beliefs to be particular things?
Hi Markos!
Thanks for engaging with my paper. You’ve asked precisely the question that I had hoped and expected you would ask. (In part, this portion of my paper is a response to your paper on reasoning that came out last year.)
I am happy to accept that (1) believing that p and (2) believing that p conclusively supports q, might be jointly sufficient for (3) believing that q. In fact, I am happy to accept that (1) and (2) are even jointly sufficient for (4) believing that q on the basis of p. But what I am not willing to grant is that (1) and (2) are jointly sufficient for (5) believing that q on the basis of p in such a way that the belief that q is doxastically justified by being so based. (1) and (2) are consistent with the possibility of what I call “side defeat” in the paper or “improper basing” in the video – i.e. a basing relation that does not transmit justifiedness from basis to that which is based upon it. And I’m trying to develop an account of basing that explains precisely how that is possible.
Does that help? Are you still finding it hard to imagine someone being given evidence that favors the psychological hypothesis I describe (whether or not you think that, as a matter of metaphysical necessity, that hypothesis is false)?
Hi Ram,
Thanks for the reply! I will have to think more about this, but let me try to see if I have it right.
So, let’s take it as agreed that, on the relevant way of understanding beliefs about support, believing R and that A follows from R suffices for believing A on the basis of R. And the suggestion now is that even if I am (doxastically) justified in believing both R and that A follows from R, my belief in A might still fail to be doxastically justified, because I may also have reason to believe that I am not suitably sensitive to reasons for A.
To make this more concrete, is the following close to what you are thinking by “sensitivity” here? I might have reason to think that even if I had no reasons for A, I would still believe A — perhaps A is a belief that fits very well with my normative political views, for example, and I know I am prone to bias of this sort. (This is meant to be analogous to the case you construct against the dispositional view.)
This makes sense, but I am not sure I am totally convinced by it. I wonder, could one get out of it by arguing that we should count beliefs differently? So, the belief that I would have had in the case where I lacked any grounds for A but still believed A would fail to be doxastically justified; but my actual belief is based on good grounds, and so is doxastically justified.
I guess one might worry that once I get evidence that my belief would persist even in the absence of evidence, I should lose confidence that my actual belief is based on the evidence that I do have. (Perhaps that was the original worry?) But then couldn’t we also argue that I should also lose confidence in R, or in the claim that A follows from R? (After all the problem is supposed to be that I have evidence that my assessment of evidence is biased.)
Anyway, thanks, all this is very interesting.
Hi Markos!
You write: “I guess one might worry that once I get evidence that my belief would persist even in the absence of evidence, I should lose confidence that my actual belief is based on the evidence that I do have. (Perhaps that was the original worry?) But then couldn’t we also argue that I should also lose confidence in R, or in the claim that A follows from R? (After all the problem is supposed to be that I have evidence that my assessment of evidence is biased.)”
In response to your first parenthetical question, I answer “yes”. In response to your second question, I answer “no”. Do you think it’s possible for a particular belief that I have (in a proposition p) to be based on some evidence E, that provides compelling justification for p, even though I, the believer, mistakenly believe that my belief that p is based on some distinct evidence E’ (≠E)? I think it is. I don’t think that such error concerning our bases is impossible. Do you think it is impossible?
Hi Ram,
No, I don’t think that errors about basing are impossible in general; but I think we both agree that they are impossible in some cases, since basing is conjuring after all!
Oh, and about deixis: I don’t want to commit myself here too specifically about the nature of the deictic referent, other than to say that it is an explanatory relation between two particulars. But whether the particulars so related are facts or states or events — I want to leave that as open as possible here. So basing can be as world-involving as you like, on the account I offer here.
Hi Ram, okay, here is a new question. You say that the account of basing you offer here is supposed to explain how it can transmit warrant or justification. But are you open to the possibility that a different form of basing could transmit various bad epistemic properties, or at least explain why a belief is unwarranted or unjustified? I am thinking here of Susanna Siegel’s work on “wishful seeing”, and her idea that a belief or perceptual state can be based in a desire in a way that makes it ill-founded: her conception of the relevant basing relation is clearly different from yours, and if you’re right it won’t be enough to explain what makes a belief well-founded, but do you think the ill-founding of a belief might require something less than this? It all sounds very disjunctivist, which makes me think you might go in for it.
Hey John,
Excellent question, to which my answer is “no”. Basing, on my view, may be either proper (involving the knowledgeable application of the concept ‘RDC justifier’ to the explanatory relation constituted by that very same concept-application) or improper (involving non-knowledgeable application of that same concept). Only the former can transmit justifiedness from basis to the RDC based upon it. Can improper basing transmit unjustifiedness? I’m inclined to think not. To say that improper basing transmits unjustifiedness is to say that improper basing is what explains its being the case that the resulting RDC lacks RDC justification (e.g., doxastic justification) BECAUSE the basis is unjustified. But I think that is wrong: whenever the basis is unjustified, the resulting RDC will also lack RDC justification, and that is true whether or not the basing relation between them is proper. So the propriety of the basing relation does not make a difference to whether the resulting RDC lacks RDC justification when the basis is unjustified.
That said, I feel quite confident that the cases to which Siegel calls attention are not cases of basing. Priming might be the reason why the object looks to me like a gun, but it is not the reason for which the object looks to me like a gun. Nothing could be a “reason for which” anything looks any way at all to me: perceptual experiences might furnish more or less justification to the beliefs based on them, but (contra Siegel) perceptual experiences are not themselves either more or less justified.
Hi Ram —
You correctly predicted that I would disagree with your paper! 🙂
I’d suggest that we’re 90-99% baby and beast (following up on your reply to John’s comment), so that almost all our cognition is structured the same way as theirs is, so if the basing relation doesn’t work for them, it doesn’t work for us most of the time, either. There’s an overlay of language and explicit conscious reasoning sometimes, but I’m disinclined to think that that’s where most of the real cognitive action is. So take an example like thinking someone is mad at you. I am justified in thinking that Kylie is mad at me, say, from some combination of her facial expression, her vocal tone, and my general knowledge about her and the situation. There’s a human sophistication to this, of course, but the cognitive mechanisms won’t be that far from the mechanisms involved when my dog thinks I’m mad at her, given that (as would ordinarily be the case) I’m not actually consciously thinking about my justification. But presumably I am justified and it is based on some combination of such factors.
Okay, so that’s part of my background perspective. In light of that perspective, here’s my question:
How do I do the ostension that is central to your account? Dilemma: If it’s conscious, then it’s rare, because it’s rare for us to consciously consider our justifications. If it’s unconscious, to establish that it’s going on, you would need to get into the nitty-gritty of cognitive mechanisms and the evidence for and against the presence of such unconscious ostensions, which you don’t really do here.
I’m sure you have a ready reply, since this is basically the intellectualizing issue that Kate Nolfi worried about in her comments at the APA and that John pointed briefly to above. Unfortunately, I’ve forgotten what you said in reply to this issue at the APA, so I’m kind of having to start over!
Excellent question Eric: thanks!
How can basing (which I take to be ostending) be something that I know, by a priori reflection alone, to be occurring, and yet fail to be conscious? In other words, how can my a priori knowledge of my own mental states and events extend beyond what is conscious?
Have I understood your question correctly?
If so, then let me ask: do you think that a priori reflection on my own linguistic behavior can provide me with knowledge of, say, the rules that I unconsciously follow in that behavior? Do you think that a priori reflection on my own behavior in a particular instance can provide me with knowledge that, say, what made me behave in that passive aggressive way to my colleague was that I was mad about his comment… ?
I think the answer to both of the two questions above, as well as plenty others of the same kind, is “yes”. Are we still together on that?
Finally, if a priori reflection can provide me with knowledge of which rules I’m unconsciously following, or which emotions I was unconsciously expressing in my behavior, then why can’t it also provide me with knowledge of which things (states, relations, acts) I’m unconsciously aware of?
So I reply to your “how is it possible?” question by saying something of the form “what could be easier? don’t we do things of this kind all the time?” But I want to see at precisely which points we disagree.
Also, Eric, there’s something I agree with and something I disagree with in your claim that we’re 99% baby and beast. Consider a pre-political tribe of 99 people who all live together under conditions of safety and abundance, and who neither need nor want any form of government. Now a leader persuades these 99 people to establish a state, and to appoint him as sole legislator, executive, and judge. Now there are 100 people. Together, they compose a state. There is something true, but also something false, in the claim that this state is 99% just the pre-political tribe that was there before.
Thanks for helping clarify, Ram. Fair enough on the 99% beast thing — there’s only so far, I think, that we can take the discussion at that level of abstraction.
I’m not sure I fully understand your reply about a priori knowledge of the ostension. Is the thought that the ostension is unconscious, but that, based on an empirically observable pattern of reactions (where introspection, I would say, qualifies as a kind of empirical observation), I can know from a priori principles that inner ostension is occurring? That doesn’t really sound a priori to me, since it depends crucially on the empirical observations. (I don’t know a priori that there are 5 balls on the table just because I see 3 here and 2 there and know a priori that 3+2=5.) Not to quibble about “a priori” exactly, but rather to bring out that it sounds to me like your view is what I would call an empirical claim about mental structure, based on a certain range of empirical evidence. Is that right?
If so, then the follow-up thing I would wonder about is how good that evidential basis is, and whether there might be supporting or conflicting empirical evidence from cognitive psychology.