
The Revisability View of Belief

Grace Helton, University of Antwerp

[PDF of Grace Helton’s paper]

[Jump to Michael Bishop’s commentary]
[Jump to Neil Van Leeuwen’s commentary]
[Jump to Grace Helton’s reply to commentaries]

It is widely held that for some mental state to be a belief, it must be, in some sense or other, responsive to evidence (Adler, 2002; Currie & Ravenscroft, 2002; Gendler, 2008; Shah & Velleman, 2005; Velleman, 2000; cf. Bayne & Pacherie, 2005; Bortolotti, 2011).1 The claim that beliefs are in fact evidence-responsive is distinct from the normative claim that beliefs ought to respond to evidence. The descriptive claim says that if some mental state is not evidence-responsive in the appropriate way, it is not a belief, though it may be some other kind of cognitive attitude, such as an entertained thought, a pretense, or a non-doxastic delusion.

Though many theorists endorse the view that beliefs are necessarily evidence-responsive, this claim is rarely argued for.2 Instead, it is presented as an obvious conceptual truth or is simply presumed on the way to arguing for other claims (Adler, 2002; Currie & Ravenscroft, 2002; Egan, 2009; Gendler, 2008). The lack of a cogent defense of the view is particularly troubling in light of empirical evidence that many beliefs are formed in response to very poor evidence, or fail to be revised even when contravened by excellent evidence.3 If the view that beliefs are evidence-responsive requires that we exclude all such states from the class of belief, this may count as a reason to reject the view.

In this essay, I develop and defend a particular version of the view that beliefs are necessarily evidence-responsive. This is the revisability view of belief, which says that if some mental state is a belief, then that mental state must be nomically capable of being rationally revised in response to any bit of available, sufficiently strong evidence that conflicts with it. Since the revisability view does not require that beliefs be formed in response to evidence, but requires merely that existing beliefs can be revised in response to evidence, the view is compatible with evidence that beliefs are frequently formed in response to very poor evidence. The revisability view can also accommodate the fact that beliefs frequently are not revised in response to conflicting evidence. So long as such states have a certain capacity to be revised, they can count as beliefs.

That the revisability view can accommodate irrational beliefs shows merely that the view clears a hurdle any view of belief must clear. It does not, by itself, supply a positive reason to accept the view. The centerpiece of the paper is such an argument, the argument from the norm of revision. This argument moves from a claim about belief’s susceptibility to a certain norm of rationality, to the conclusion that all beliefs are capable of being revised in response to conflicting evidence. The key to this transition is a certain epistemic version of the principle ‘ought’ implies ‘can.’ Painted in the broadest of strokes, the argument is as follows:

    1. All beliefs are rationally required to be revised in response to conflicting evidence.
    2. If some mental state is rationally required to be revised in response to conflicting evidence, then that mental state can be revised in response to conflicting evidence.
    3. Therefore, all beliefs can be revised in response to conflicting evidence.

In §1, I develop the revisability view of belief. In §2, I present the main argument in favor of the revisability view. In §3, I consider what predictions the revisability view makes of particular mental states, including faith-based religious views. In §4, I conclude.

0 Introduction

Before proceeding to my main arguments, I want to say something about the way I am conceiving of beliefs, and the method I am using to investigate them. I am presuming that if beliefs exist at all, they exist whether or not humans recognize them or regard them as existing. Beliefs are in this respect like atoms of calcium, Joshua trees, and wind currents. They are entities we must posit to explain some interesting range of empirical phenomena. Beliefs differ in this respect from both money and the Canadian province of Saskatchewan, which plausibly exist in virtue of something like implicit consent and authoritative decree, respectively. The metaphysical presumption that beliefs exist independently of human responses immediately rules out that we might merely decide or stipulate what beliefs are.

My strategy of isolating beliefs is primarily one of provisional reliance on a cluster of core claims typically associated with beliefs. On this strategy, both philosophical and ‘folk’ platitudes about beliefs can be useful in the initial stages of theorizing, but these platitudes should be treated as revisable in light of disconfirming evidence. There are other cases in which a cluster concept can help one pick out some entity even when the relevant cluster of features turns out not to obtain in that entity. For instance, in some contexts, the cluster concept the man drinking champagne in the corner can help one identify what is really a woman drinking vodka in the corner. Likewise, the typical cluster of claims associated with beliefs—that they are action-guiding, inferentially promiscuous, rationally coherent, and so on—might turn out to be useful in picking out beliefs, and thus in ultimately discovering the nature of beliefs, even if the total cluster of claims should turn out not to hold of beliefs.4

One reason I treat platitudes about beliefs as provisional is that I take it to be a near-datum that we have beliefs. Thus, I am presuming that eliminativism about beliefs, on which humans do not enjoy beliefs at all, is a highly implausible view, one which is more implausible than at least some radically revisionist views about the nature of belief. This means that there are at least some cases in which, given the choice between doing without belief in our theorizing about human psychology, or adopting a highly revisionist theory of belief, we should embrace revisionism. For instance, if it should turn out that we have been massively misled about the relation between belief and action, such that belief never guides action in the appropriate way, it may be that we should reject the view that beliefs are constitutively action-guiding, instead of concluding that humans do not enjoy beliefs.

Finally, a locutional note: I am using ‘belief’ to pick out a very broad range of states, including occurrent, merely dispositional, endorsed, and non-endorsed states. Thus, my ‘belief’ includes what are sometimes called judgments, where these are occurrent states that are not necessarily reflectively endorsed. ‘Belief’ further picks out both states that are produced by an automatic, non-conscious, fast, and heuristic process and states that are produced by an effortful, conscious, and analytic process. This liberal usage of ‘belief’ reflects the ambitions of the current project, which aims to identify what all such states have in common.

1 The Revisability View of Belief

In this section, I lay out my positive proposal, the revisability view of belief. On this view, in order for some mental state to count as a belief, it must have a certain capacity to be revised in response to conflicting evidence:

THE REVISABILITY VIEW OF BELIEF: if some mental state is a belief, then it is nomically capable of being rationally revised in response to any piece of available, sufficiently strong evidence that conflicts with it.

Put slightly more formally, the revisability view says: for all x such that x is a belief, and for all y such that y is some piece of available, sufficiently strong evidence that conflicts with x: x is nomically capable of being rationally revised in response to y.
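This quantificational structure can be set out in standard first-order notation. The following is a schematic gloss; the predicate letters are my own shorthand, not the paper's:

```latex
% B(x):    x is a belief
% E(y, x): y is available, sufficiently strong evidence that conflicts with x
% R(x, y): x is nomically capable of being rationally revised in response to y
\forall x \,\forall y \,\bigl[\bigl(B(x) \land E(y,x)\bigr) \rightarrow R(x,y)\bigr]
```

Note that the universal quantifier over y makes explicit that revisability is demanded with respect to each piece of conflicting evidence taken individually, not with respect to all pieces simultaneously, a point the next subsection develops.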

The revisability view is a descriptive claim, not a normative claim. It is not a claim about how beliefs ought to be; it’s a claim about how beliefs must be insofar as they are beliefs. The revisability view says that if some state is not nomically capable of being rationally revised in response to any bit of available, sufficiently strong evidence that conflicts with it, it is not a belief, though it may be one of the other cognitive attitudes, such as an entertained thought, an assumption, or a pretense.5 Cognitive attitudes treat some state of affairs as obtaining, and in this way contrast with conative attitudes, such as wishes and desires, which treat some state of affairs as to be obtained (Shah & Velleman, 2005; Velleman, 2000).

Before proceeding to the major components of the revisability view, there are three aspects of the view worth highlighting at the outset: first, the capacity to be revised in response to any piece of evidence does not entail a capacity to be revised simultaneously in response to all pieces of evidence. It may be that some belief can be revised in response to evidence that p and can be revised in response to some other piece of evidence that not-p even though that belief cannot be revised simultaneously in response to both pieces of evidence. This restriction is consonant with how we think of other capacities: that Janelle can swim a mile and can play the Rhapsody in Blue clarinet solo does not suggest that Janelle can swim a mile while playing the Rhapsody solo. Second, the relevant capacity to be revised is not to do with how a state is formed; it is strictly to do with how an existing state responds to evidence. Hence, states which are formed in response to good evidence but which subsequently lack the capacity to be revised are not revisable in the relevant sense.

Third, whether some mental state is revisable depends on whether that state can be revised in response to conflicting evidence. This raises the question of what the view says about mental states that are never contravened by evidence. Perhaps those mental states that represent obvious necessary truths, such as the judgment it’s not the case that p and not-p, are never contravened by evidence. Or, if God is all-knowing, then perhaps God’s mental states are never contravened by evidence. If there are mental states that are never contravened by evidence, these states trivially satisfy the requirement of revisability in virtue of never failing to be revised in response to conflicting evidence. For this reason, the revisability view permits such states into the class of belief.

On the revisability view, beliefs must be (i) nomically capable of being (ii) rationally revised in response to (iii) available, conflicting evidence. I will discuss each of these components in turn. In the sense that is relevant to the revisability view, some mental state is nomically capable of being revised just in case, in at least some worlds where the relevant subject’s psychological mechanisms are held fixed, that mental state is revised.6 For mental states occurring in typical humans, the relevant worlds are those in which mechanisms of human psychology are held fixed. For mental states occurring in typical octopuses or machines or extra-terrestrials, the relevant worlds are those in which the typical mechanisms of octopus or machine or extra-terrestrial psychology are held fixed.7

Further, whether some mental state is revisable depends on whether that mental state can be revised in the very subject in whom it occurs. A mental state thus can’t count as capable of being revised in virtue of the fact that it would be revised if it were to occur in a different subject.8

While the revisability view requires that beliefs be capable of being revised in response to conflicting evidence, the view is silent about the nature of the processes that mediate this revision and relatedly, about whether subjects are aware of or voluntarily bring about this revision. It is thus consistent with the revisability view that belief revision should generally occur non-voluntarily, non-inferentially, or outside of the subject’s awareness.

The second component of the revisability view is that beliefs must be capable of being rationally revised in response to conflicting evidence. Some mental state is rationally revised in the relevant sense only if that mental state is revised: (i) in response to evidence, (ii) in the right direction, (iii) and via a non-deviant route. I will briefly sketch each of these conditions in turn.

First, for some mental state to be rationally revised, it must be revised in response to evidence. If you judge that today is Wednesday and then, as the result of an unfortunate encounter with lightning, lose this judgment, your judgment has been revised, but not in response to evidence. This route to revision is thus non-rational.

Second, for some mental state to be rationally revised, it must be revised in the right direction. Which direction is the right direction is dictated by the evidence. For instance, if the best evidence suggests that not-p, then for a mental state as of p to be rationally revised, it must decrease in strength, disappear altogether, or be suspended.9 If this state should increase in strength, it is not rationally revised.10

Finally, for some mental state to be rationally revised, it must be revised via a non-deviant route. The task of distinguishing deviant from non-deviant routes to revision is a very large one. Indeed, it is one of the primary projects of contemporary epistemology. For our purposes, it is sufficient to distinguish deviant from non-deviant routes ostensively, by a pair of contrastive cases.

First, consider a case in which you believe that extra-terrestrials do not exist. You then read in a reliable newspaper that the government has captured one. On the basis of the report, you relinquish your belief that extra-terrestrials do not exist. This is a non-deviant path to revision. Compare this case to one in which you read in the newspaper that the government has captured an extra-terrestrial, and the shock causes you to fall out of your chair and bang your head. By sheer coincidence, the blow obliterates your belief that extra-terrestrials do not exist. In this case, your belief is revised in response to evidence, but the route to revision is deviant and hence, the revision does not count as a rational revision.

The final component of the revisability view is that beliefs must be capable of being revised in response to available, sufficiently strong conflicting evidence. If evidence is construed as states of affairs, then for those states to count as available for a subject, that subject must be aware of those states of affairs. Further, the mode of presentation under which the subject represents these states of affairs (if any) must be the same mode of presentation (if any) under which the relevant belief is described.

Here and throughout, the relevant notion of evidence is meant to be neutral between external and internal individuations of evidence. On some internalist conceptions of evidence, evidence necessarily consists of consciously accessible mental states. On such conceptions, the requirement that conflicting evidence be available is harmlessly redundant. On externalist conceptions of evidence, evidence is at least partly composed of states of affairs (Kelly, 2008). On such conceptions, the requirement that evidence be available is a substantive requirement.11

Evidence is sufficiently strong in the relevant sense just in case it is strong enough to trigger a rational requirement that the relevant belief be revised. The motivation for characterizing sufficient strength in this way derives from the argument in favor of the revisability view.

Finally, conflicting evidence comes in two basic varieties. For a belief that p, conflicting evidence can be evidence in favor of some proposition q, where q is inconsistent with p.12 Or, it can be evidence that undermines p itself. For instance, suppose a subject believes, on the basis of a visual experience as of a bison, that there is a bison in the distance. Reliable testimony that there are no bison in the area would count as evidence that conflicts with this belief, since it is evidence in favor of a proposition that is inconsistent with the proposition believed. Evidence that one’s visual system is dramatically malfunctioning would also count as evidence that conflicts with the relevant belief, but for a different reason: it undermines the evidence which was the basis for that belief.

1.1 The Revisability View and Masks

The revisability view ascribes to beliefs a certain capacity, and capacities can be masked. For instance, if some glass is capable of breaking, then in at least some possible worlds, it does break. This is consistent with the fact that in many worlds, the glass won’t break, even if struck with force. Even a very fragile glass won’t break when struck if it is wrapped in soft, thick padding. But this does not mean that the padding renders the glass no longer capable of breaking. The padding merely obscures the glass’ capacity to break (Bird, 1998; Johnston, 1992).13

For at least some beliefs in humans, the conditions which tend to facilitate rational belief revision are those in which the belief and the evidence which contravenes it are both attended, and the belief is not underpinned by strong emotion. Correspondingly, for such beliefs, typical masks of the capacity to be revised will include conditions in which the belief is underpinned by strong emotion or is not attended.

There are two reasons we should not take the preceding list of masks to exhaust possible masks of revisability. First, though these conditions describe some beliefs held by humans, it may be that there are other beliefs held by humans which are revised under rather different conditions than those described and hence, whose masking conditions are different than those described.

A second reason to take the list of masks as non-exhaustive is that the revisability view is not restricted to beliefs in humans; it extends to all beliefs, both actual and merely possible, whether those beliefs occur in non-human animals, artificially intelligent beings, or extra-terrestrials. It is at least conceptually possible that some of these creatures’ beliefs are masked by very different conditions than those which mask ordinary beliefs in humans. For instance, while being underpinned by strong emotion may prevent belief revision in humans, it is at least conceptually possible that there exist creatures for whom being underpinned by strong emotion is nomically necessary for belief revision.

1.2 The Revisability View and other Views of Belief

Since the revisability view posits merely a necessary and not a sufficient condition on belief, it is not a full characterization of belief. As it turns out, revisability is probably not sufficient for belief. One reason for thinking this is that it may be that for a mental state to count as a belief, that mental state must play some motivational or action-guiding role for its subject. On this view, if a subject believes tomatoes are vegetables, she must be disposed to say and do certain things, like assert that tomatoes are vegetables, place her tomatoes in the crisper drawer she reserves for vegetables, or increase her consumption of tomatoes as part of an attempt to increase her vegetable intake.

Another reason for thinking that revisability is not sufficient for belief is that it may be that all beliefs, insofar as they are beliefs, must play characteristic phenomenal roles. For instance, it may be that the belief that there is beer in the fridge must dispose one to experience surprise when opening the fridge and finding no beer in it (Schwitzgebel, 2002).

A final reason for thinking that revisability is not sufficient for belief is that it may be that in order for some mental state to count as a belief, that mental state must be inferentially promiscuous, or available as a premise across a wide range of inferences  (Glüer & Wikforss, 2013; Mandelbaum, 2014). On this view, if one believes there is beer in the fridge, then one must be able to exploit this belief as a premise in further inferences. For instance, the belief that there is beer in the fridge might satisfy this requirement by contributing to inferences such as: There is beer in the fridge. If there is beer in the fridge, I don’t need to buy more beer. So, I don’t need to buy more beer.

Notably, the revisability view is consistent with all of these proposed additional requirements on belief. For while the revisability view posits that revisability is necessary for belief, it does not further stipulate that this is the only necessary condition on belief.

Finally, the revisability view should be distinguished from the view that belief aims at truth.14 These views come apart in both directions. One might hold that beliefs are necessarily capable of being revised in response to conflicting evidence and simultaneously deny that belief aims at truth. For instance, one might maintain that what accounts for the fact that beliefs are capable of being revised is not that beliefs aim at truth but that belief (or perhaps the organism as a whole) aims at some other outcome, such as internal consistency or holding views that are close enough to the truth for practicable action. Conversely, one might endorse the view that belief aims at truth while denying that all beliefs have a capacity to be revised in response to evidence. On this view, beliefs that entirely lack a capacity to be revised in response to evidence might be viewed as highly deviant beliefs, but they will nevertheless have truth as an aim; such states will merely be ill-equipped to achieve this aim.

In this section, I have sketched the major tenets of the revisability view of belief. On the revisability view, all beliefs, insofar as they are beliefs, are nomically capable of being rationally revised in response to evidence that conflicts with them. If there are mental states which are never contravened by evidence, such as those which represent evident necessary truths, these mental states trivially satisfy the condition of revisability and hence, can count as beliefs. States which altogether lack the nomic capacity to be revised are not beliefs, though they may be some other cognitive attitude, such as a merely entertained thought, an assumption, or a cognitive pretense. In the next section, I turn to the argument in favor of the revisability view.

2 The Argument from the Norm of Revision

In this section, I present a positive argument in favor of the revisability view, the argument from the norm of revision. This argument extends in full generality to all doxastic states, whether occurrent or dispositional, attended or unattended, unconsidered or reflectively endorsed, conscious or non-conscious, compartmentalized from other states or integrated with other states, heuristically-produced or inferentially-produced.

The argument is named after its central premise, which states, roughly, that beliefs are rationally required to be revised in response to any bit of available, sufficiently strong evidence that contravenes them. Since this claim is normative, it cannot by itself illuminate the descriptive nature of belief. But combining this claim with an epistemic version of the principle ‘ought’ implies ‘can’ yields a surprisingly powerful argument in favor of the revisability view. Here is the argument in full. Throughout, m is an arbitrarily selected mental state, and S is the subject in whom m occurs:

    1. If S’s belief m is contravened by available, sufficiently strong evidence, then S has a pro tanto obligation to rationally revise m in response to that evidence.
    2. If S is pro tanto obligated to revise m in response to available, sufficiently strong evidence that contravenes it, then S is nomically capable of rationally revising m in response to that evidence.
    3. If S is nomically capable of revising m in response to available, sufficiently strong evidence that contravenes it, then m is nomically capable of being rationally revised in response to that evidence.
    4. If S’s belief m is contravened by available, sufficiently strong evidence, then m is nomically capable of being rationally revised in response to that evidence. That is, the revisability view of belief is true.
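Set out schematically, the argument is a chain of conditionals discharged by hypothetical syllogism. The abbreviations below are mine, not the paper's:

```latex
% C(m):   m is contravened by available, sufficiently strong evidence
% O(S,m): S is pro tanto obligated to rationally revise m in response to that evidence
% K(S,m): S is nomically capable of rationally revising m in response to that evidence
% R(m):   m is nomically capable of being rationally revised in response to that evidence
\begin{align*}
(1)\quad & C(m) \rightarrow O(S,m) && \text{the norm of revision}\\
(2)\quad & O(S,m) \rightarrow K(S,m) && \text{epistemic `ought' implies `can'}\\
(3)\quad & K(S,m) \rightarrow R(m) && \text{subject-level to state-level capacity}\\
(4)\quad & \therefore\; C(m) \rightarrow R(m) && \text{the revisability view}
\end{align*}
```

Displayed this way, it is plain that the argument's validity is not in question; everything turns on the truth of the three conditional premises, and above all on (1) and (2).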

The heavy lifters in the argument are the first two premises: (1) is the norm of revision, and (2) is an epistemic version of ‘ought’ implies ‘can.’ As we shall see, (3) is a truism or a near-truism. I turn now to defending the argument, focusing attention on the first two premises.

2.1 The Norm of Revision

The first premise of the argument just is the norm of revision. It says that subjects who enjoy beliefs that are contravened by available, sufficiently strong evidence have a pro tanto obligation to rationally revise that belief in response to that evidence. Here as before, a belief is rationally revised only if it shifts in the right direction in response to evidence and via a non-deviant route.

Pro tanto obligations are obligations that retain their force even when trumped by more pressing obligations (Scanlon, 1998, p. 50). In this way, they contrast with all-things-considered obligations. The following case is illustrative: you have promised to attend your friend’s viola performance. He is performing one solo at the beginning of a longer concert. On the way to the show, you encounter a badly injured child who needs medical attention. If you stop and help the child, you will certainly miss your friend’s performance. In this case, you are pro tanto morally required to keep your promise to your friend, at the same time that you have a more pressing moral requirement to ensure that the child receives medical care. Thus, your all-things-considered obligation is to assist the child, but this does not change the fact that you have a pro tanto requirement to attend your friend’s concert. That requirement is simply trumped by another more pressing obligation.

The distinction between pro tanto and all-things-considered requirements also obtains in the epistemic domain. The reason the norm of revision is articulated in terms of a pro tanto and not an all-things-considered requirement is that the subject whose belief is contravened by evidence may also have other evidence which supports a different all-things-considered epistemic obligation. For instance, if one believes, on the basis of a visual experience, that there is a bison in the distance, and one also has very good evidence that one’s visual system is malfunctioning and more particularly is causing visual bison hallucinations, one is at least pro tanto rationally required to revise one’s belief. However, if one also has evidence in the form of reliable testimony that there is in fact a bison in the distance, it may be that one’s all-things-considered epistemic obligation is to maintain one’s belief. Nevertheless, the pro tanto requirement to revise the belief does not disappear in light of the more pressing epistemic obligation; it is simply trumped such that it is not, all-things-considered, what one ought to do.

It remains to argue for the norm of revision. I am taking (1) to reflect an intuitive, core feature of belief but much more importantly, to at least partly explain why it matters whether some state is a belief or not. Thus to reject (1) would be to both dissociate belief from norms of rationality in a peculiar way and, what’s worse, to deprive the category of belief of its particular theoretical interest.

Consider that to reject (1) would be to accept that there exists some belief, held by some subject S, such that: S has available, sufficiently strong evidence that contravenes that belief, and yet S is not so much as pro tanto rationally required to revise that belief. Certainly, we can describe a case that satisfies these conditions, but that doesn’t suggest that such a description reflects a genuine and not merely an epistemic possibility. For any such proposed case, one might reasonably doubt whether such a state is a belief and not merely an entertained thought, a pretense, an assumption, or some other attitude altogether.

In favor of the norm of revision is that it captures our pre-theoretic intuitions about a range of cases. If you believe there is fruit on your kitchen table and then, walking into the kitchen, see that the fruit bowl is empty, you should revise your belief. If you believe your child did not shoplift from a local convenience store and subsequently view surveillance footage showing your child doing just that, you should revise your belief. If you believe God exists and subsequently come to believe that the suffering that exists in the world is inconsistent with the existence of God, you should revise your belief.

In all these cases, the relevant obligation is an obligation of rationality. In some of these cases, obligations of morality or of prudence may recommend different courses of action. For instance, it may be that morality requires that you believe your child when she says that she did not shoplift, even though the surveillance footage says otherwise. The presence of such an overriding moral obligation, however, would not make it the case that you lose the pro tanto rational obligation to revise your belief; the moral obligation merely trumps it.

Importantly, the norm of revision extends to beliefs that are unattended, formed on the basis of perception, or non-conscious. Insofar as these states are beliefs, they ought to be rationally revised in response to sufficiently strong, available evidence that contravenes them. If you believe that the Bowery runs from east to west, and then come to examine a map of Manhattan, paying special attention to the Lower East Side, you should revise your belief. If you judge, on the basis of a perceptual experience, that two lines in a figure you are viewing are of different lengths, and then come to learn that the figure is illusory, you should revise your belief. If you believe implicitly, as the undetected effect of watching too many commercials, that drinking fruit juice is healthy, and then come to read about the ill effects of fruit juice on insulin response, you should revise your belief.

A final observation about (1) before moving to (2): the connection to norms of rationality is distinctive of belief, in that there are at least many other attitudes which do not exhibit it. For instance, suppose that some subject merely entertains the thought that she is the ruler of Sweden, perhaps to amuse herself during a particularly dry philosophy talk. Suppose also that she has excellent evidence that she is not the ruler of Sweden. There is no rational requirement that this subject revise her entertained thought. There may be other reasons, such as prudential reasons, for her to abandon the entertained thought, but rationally speaking, there is nothing amiss about it.

It would appear then, that the fact that beliefs are susceptible to the norm of revision suggests something special about the nature of belief. The tantalizing hope is that we might exploit belief’s susceptibility to this norm to learn something substantial about belief’s nature, something that distinguishes it from at least some other attitudes. By combining (1) with an epistemic version of ‘ought’ implies ‘can’, we can do just that.

2.2 Epistemic ‘Ought’ Implies ‘Can’ 

I now turn to (2): if S is pro tanto rationally required to revise m in response to available, sufficiently strong contravening evidence, then S is nomically capable of rationally revising m in response to that evidence. (2) is an epistemic version of ‘ought’ implies ‘can.’ It says that if a subject ought to revise a belief, then she can revise that belief, in some sense of ‘can.’ As we shall see shortly, the relevant sense of ‘can’ is relatively weak, in that it does not require that the subject can voluntarily bring about the relevant revision.

The primary support for (2) is that it falls out of a more general claim about what agents can be rationally required to do, given their psychological limitations. In general, agents who are cognitively incapable of bringing about some state of affairs cannot be rationally required to bring about that state of affairs, even when their more cognitively capable counterparts might be so required. For instance, a typical two-month-old human infant cannot be rationally required to discriminate between an inaccurate depiction of the human body and an accurate depiction of the human body; in certain circumstances, typical adults might be so obligated.15 Likewise, someone who reads English but not Mandarin cannot be rationally required to notice a glaring contradiction between a Mandarin text and its English translation; in certain circumstances, someone who reads both languages fluently might be so required.

The question arises: why aren’t infants rationally required to appreciate inaccuracies in depictions of the human body, when their adult counterparts might be so required? And why aren’t non-readers of Mandarin rationally required to make assessments on the basis of Mandarin-encoded information, when their Mandarin-fluent counterparts might be so required? If we accept the principle underlying (2), that rational requirements entail a correlative psychological capacity, then we enjoy a straightforward and elegant answer to these questions: infants cannot be rationally required to recognize inaccuracies in depictions of the human body, and non-readers of Mandarin cannot be rationally required to appreciate a glaring problem in a Mandarin text, precisely because they lack the correlative capacities. Psychological capacity serves as a limit on rational requirement. Anyone who rejects this view must offer some explanation of why we don’t hold infants to the same rational standards as adults and why we don’t rationally require adults to incorporate information they lack the capacity to understand.

In the contemporary literature, epistemic versions of ‘ought’ implies ‘can’ have been widely discussed, but it is important to note that these disputes are for the most part orthogonal to whether we ought to accept the ‘ought’ implies ‘can’ of (2).16 This is because the contemporary literature on epistemic versions of ‘ought’ implies ‘can’ is primarily concerned with versions of the principle that entail a voluntary capacity to revise one’s beliefs. In contrast, (2) is neutral on whether the relevant subject can voluntarily bring about the relevant revision. Since our ultimate concern is not with the nature of the agent’s capacity to revise her belief, but is rather with the revisability of the belief itself, it is sufficient for our purposes that the relevant belief revision be appropriately causally related to the agent or some of the agent’s states, whether or not that causal relation is the product of voluntary action.17

Since the kind of capacity relevant to (2) does not require a capacity for voluntary action, the ‘ought’ implies ‘can’ of (2) survives counter-examples which threaten stronger versions of ‘ought’ implies ‘can.’ For instance, consider the following case from Sharon Ryan, designed to refute one such stronger variant of ‘ought’ implies ‘can’:

STICKY FINGERS

Your kleptomaniac friend, Sticky Fingers, has been accused of stealing your most prized possession. You are fond of Sticky Fingers and trust her completely. You believe she is innocent of the crime. After an investigation by the police, you are presented with conclusive evidence that Sticky Fingers committed the theft, but your belief in her innocence does not waver. You are simply psychologically incapable of voluntarily revising or relinquishing your belief in your friend’s innocence. Nevertheless, so far as rationality is concerned, you ought to give up this belief (Ryan, 2003, p. 59).18

It may be that STICKY FINGERS succeeds as a counter-example to versions of ‘ought’ implies ‘can’ that require that beliefs can be revised voluntarily. However, STICKY FINGERS does not rebut the version of ‘ought’ implies ‘can’ outlined in (2) since this version does not require a voluntary capacity for revision.

Moreover, if the case were modified in a way that would render it a threat to (2), we would no longer enjoy the intuition that you are rationally required to revise your view of Sticky Fingers’ innocence. Consider what such a case would involve: your view of your friend’s innocence would have to be such that, holding fixed your psychological mechanisms, there is no possible world in which your view of your friend’s innocence is rationally revised. Even if your warm feelings towards your friend were to disappear entirely, such that you no longer held her in any particular esteem, your view of her innocence would still not be revised in response to the evidence. Even if you were to spend years considering the evidence against your friend, your view would still not be revised.

It seems to me that in this case, in which your view of your friend’s innocence is genuinely nomically incapable of being rationally revised, it is obscure in what sense you might be rationally required to revise your view. Perhaps it would be rationally better if you were to revise your view, but some outcome can be rationally better without being rationally required. Consider an analogy from the moral domain: the world would be a better place if you were to leap a mile-long chasm to rescue an imperiled child. However, you are not morally required to bring about this event. You simply can’t leap a mile-long chasm, so you are not so obligated. Likewise, if your view of your friend’s innocence can’t be revised, you cannot be required to revise it.

2.3 The Argument Completed

The final premise in the argument is (3), which, to simplify somewhat, states: If S is capable of revising m, m is revisable. I am taking (3) to be a kind of truism. If someone is capable of driving that car, that car is capable of being driven. If someone can cook that eggplant, that eggplant is capable of being cooked. For any particular, if someone can perform some action on that particular, then that particular is capable of having that action performed on it. Token beliefs fall under this general schema. If some subject is capable of bringing about a revision of some particular belief, then that particular belief is capable of being revised.

Finally, (4) is the conclusion, and states that for any arbitrarily selected belief, that belief is nomically capable of being rationally revised in response to available, sufficiently strong evidence that contravenes it. In other words, the revisability view of belief is true.
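Abstracting from the details, the argument has the shape of a chain of conditionals. The following schematic rendering is my own shorthand, not the paper’s notation: B(m) abbreviates ‘m is a belief held by S’; O(S, m), ‘S is pro tanto rationally required to revise m in response to available, sufficiently strong contravening evidence’; C(S, m), ‘S is nomically capable of rationally revising m in response to that evidence’; and R(m), ‘m is nomically capable of being rationally revised’:

```latex
\begin{align*}
\text{(1)}&\quad B(m) \rightarrow O(S, m)
  && \text{(norm of revision)}\\
\text{(2)}&\quad O(S, m) \rightarrow C(S, m)
  && \text{(epistemic `ought' implies `can')}\\
\text{(3)}&\quad C(S, m) \rightarrow R(m)
  && \text{(capacities transfer to their objects)}\\
\text{(4)}&\quad \therefore\; B(m) \rightarrow R(m)
  && \text{(from 1--3, by hypothetical syllogism)}
\end{align*}
```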

What exactly does this argument show? As it turns out, it tells us something quite surprising and informative about the nature of belief. Beliefs are not merely capable of being revised in some bare metaphysical sense (even states very different from belief, like desires, emotions, and pretenses, might exhibit a capacity that weak). Rather, all beliefs, insofar as they are beliefs, are such that in at least some nomically possible worlds where they are contravened by evidence, they are revised.

I take the argument developed in this section to provide a strong reason for holding that beliefs are necessarily nomically capable of being rationally revised. In effect, the argument says that any opponent of the revisability view must pay a cost. On the one hand, she might give up the norm of revision, which would dissociate belief from rational requirement in a surprising way, one that would deprive belief of much of its theoretical interest. On the other hand, she might give up the view that a rational requirement to revise entails a correlative nomic capacity to revise. But this would deprive her of a straightforward explanation of why subjects who lack a nomic capacity to bring about some state of affairs cannot be rationally required to bring about that state of affairs.

In the next section, I consider what predictions the revisability view makes of particular cases. I argue that mental states that are sustained by confirmation bias can count as beliefs, but that faith-based religious views and similar states cannot. I further argue that the exclusion of faith-based religious views and similar states from the class of belief—though initially counter-intuitive—is ultimately a desirable result.

3 The Predictions of the Revisability View

Many beliefs held by actual humans are irrational. In developing a descriptive theory of belief, it is important to allow for this fact. If we don’t, we risk ending up with a theory of good belief, where what we wanted was a theory of belief (Huddleston, 2012). In this section, I show that the revisability view is not at risk of ending up a theory of good belief. It permits at least some irrational states into the class of belief, including at least some states which are the result of confirmation bias and at least some states which are emotionally underpinned.

3.1 States Permitted into the Class of Belief

The phenomenon of confirmation bias occurs when subjects ignore or disvalue evidence which conflicts with their existing views and attend to or overvalue evidence which supports their existing views. Confirmation bias is widespread in human reasoning; it has been observed in the context of paranormal beliefs, political beliefs, racist beliefs, and the pessimistic beliefs that are associated with certain anxiety disorders (Nickerson, 1998).

In a representative paradigm investigating confirmation bias, subjects were asked to indicate on a pre-defined scale the degree to which they were in favor of or against neuro-enhancement, or the non-palliative use of medicine for the purpose of improving cognitive, artistic, or athletic abilities. They were then given brief descriptions of eight different arguments, four of which were in favor of neuro-enhancement, and four of which were against it, and were asked to choose one of the eight arguments to read. Subjects who had self-rated as in favor of neuro-enhancement tended to select an argument in favor of neuro-enhancement. Subjects who had self-rated as against neuro-enhancement tended to select an argument against neuro-enhancement. Thus, subjects avoided evidence that might challenge their pre-existing views. After reading their chosen argument, the subjects exhibited very little shift in their views (Schwind et al., 2012).

Arguably, mental states which persist due to confirmation bias are irrational. At least, the strategy which sustains them is at odds with the widespread assumption in philosophy of science that the best way to test a theory is to try to falsify it. Nevertheless, the revisability view can count at least some mental states that are sustained by confirmation bias as beliefs. The revisability view says that all beliefs must be nomically capable of being revised in response to evidence. And, as it turns out, at least some mental states which are sustained by confirmation bias are capable of being so revised. For instance, in a variant of the study of subjects’ views of neuro-enhancement, subjects were encouraged to read an argument that was inconsistent with their reported view. This simple intervention resulted in subjects’ moderating their initial views of neuro-enhancement (Schwind, et al., 2012). This demonstrates that in at least some cases, confirmation bias can be remediated simply by drawing subjects’ attention to potentially disconfirming evidence; these states are thus nomically capable of being rationally revised.

Another class of beliefs which tend to resist evidence are emotionally underpinned beliefs. For instance, suppose that your child is accused of shoplifting cigarettes from a local convenience store. Suppose further that you enjoy evidence that your child in fact stole the cigarettes but that you nevertheless maintain her innocence. The question is: are you capable of rationally revising your view in response to the evidence? This depends on the particulars of the case.

There are practical and ethical problems in directly testing whether particular emotionally underpinned states would be revised if dissociated from emotion. Thus, it is difficult to say, of any particular emotionally supported state, whether that state is revisable. Nevertheless, there is indirect evidence that at least some such states—perhaps including your view that your child did not steal the cigarettes—would be revised if divorced from feeling or if contravened by very strong evidence.

A primary mechanism of belief revision in humans involves cognitive dissonance, which is a kind of discomfort triggered when a subject experiences her views as in conflict. When subjects experience dissonance, they tend to revise one of their conflicting views in the direction of coherence with the other view. The dissonance itself—the feeling of discomfort—plays an essential role in this process (Harmon-Jones & Harmon-Jones, 2007; Elliot & Devine, 1994). This suggests that the catalyst of belief revision is the motivation to reduce dissonance. When the challenged belief is such that giving it up would cause emotional distress, it may be preferable for a subject to remain in dissonance and to suffer the minor discomfort associated with it than to suffer the greater emotional cost of giving up a dearly held belief.

Thus, at least some emotionally underpinned states—perhaps including your view that your child is innocent—may have a masked capacity to be rationally revised. Where the negative feeling associated with cognitive dissonance is not strong enough to ‘unmask’ that capacity, those states will remain unrevised. If your view of your child’s innocence is such a state, then the revisability view can admit it into the class of belief. If, on the other hand, your view of your child’s innocence lacks altogether a nomic capacity to be rationally revised, perhaps because it is constitutively tied to positive feelings about your child, then the revisability view will exclude it from the class of belief.

3.2 States Excluded from the Class of Belief

So far, I have shown that the revisability view permits at least some irrational states into the class of belief, including at least some mental states that are sustained by confirmation bias and at least some emotionally underpinned states. Whatever else we might say about the revisability view, it does not commit the error of winding up a theory of good belief.

Nevertheless, the revisability view makes certain predictions that may seem counter-intuitive. For it excludes from the class of belief any mental state which is not capable of being rationally revised, even if that mental state guides action, is sincerely endorsed by its subject, and serves as a premise in a wide range of inferences. There are two kinds of mental states which satisfy this description: the first kind consists of states which are neither formed in response to evidence nor subsequently enjoy a capacity to be revised in response to evidence. It may be that some faith-based religious views are like this. The second kind consists of states which are formed in response to good evidence, but which altogether lack a capacity to be subsequently revised. It’s unclear whether states of this second sort are common in humans, but they are at least conceptually possible; call such states idées fixes.

First, consider the subject who makes a Kierkegaardian leap of faith and accepts—in some sense of ‘accepts’—that God exists. Suppose that the resulting acceptance lacks any capacity to be revised in response to conflicting evidence. Suppose further that this acceptance plays a substantial role in motivating behavior—for instance, it explains why its subject engages in prayer, attends religious services, and the like—and that this acceptance is also sincerely and explicitly endorsed by its subject. Finally, suppose this acceptance is inferentially promiscuous, in that it is available as a premise in a wide range of inferences, such as: If God exists, we should love our neighbors. God exists. So, we should love our neighbors. All of these features would seem to suggest that this acceptance is a belief, its unrevisability notwithstanding. However, if this acceptance is genuinely nomically incapable of being rationally revised, the revisability view excludes it from the class of belief.

Next, consider a subject who forms the view that her neighborhood’s farmer’s market takes place on Fridays. This view is formed in response to excellent evidence. At some point after forming this view, this subject suffers a minor brain lesion which leaves her cognitive faculties entirely intact except for the curious result that she cannot revise her view about the local farmer’s market. She sees flyers advertising that the market has been rescheduled for Sundays, her friends repeatedly tell her the market is now on Sundays, she even visits the farmer’s market on Sundays (quite by accident, since she doesn’t anticipate its being held then), but she simply cannot revise her view that the market is held on Fridays. She shows up at the usual spot every Friday, bags and shopping list in tow. She tells anyone who asks that the market is on Fridays. She relies on this claim as a premise in a wide range of inferences, such as: The market is on Fridays. Today is Friday. The market is today. The lesion has transformed her prosaic belief into an idée fixe.19 On the revisability view, this mental state is not a belief, even though it was formed in response to good evidence. It may, however, be some other cognitive attitude, such as an entertained thought, an assumption, a cognitive pretense, or a non-doxastic delusion.

That the revisability view excludes faith-based religious acceptances and idées fixes from the class of belief might suggest that the view is too strong. After all, these states are action-guiding, sincerely endorsed by their subjects, and inferentially promiscuous. If they do not count as beliefs, we should like some explanation of why they do not. In the absence of such an explanation, it may seem that we should reject the revisability view, in favor of the following view:

THE ANTI-REVISABILITY VIEW OF BELIEF: At least some beliefs are not nomically capable of being revised in response to available, sufficiently strong evidence that conflicts with them.20

3.3 Sincere Assertion, Motivational Role, and Inferential Promiscuity

In this section, I wish to defuse the intuition that unrevisable faith-based religious acceptances and idées fixes are beliefs. I do this, first, by suggesting that the source of this intuition is that these states have many of the typical traits of belief: they guide action, they are sincerely endorsed by their subjects, and they exhibit inferential promiscuity. Second, I argue that none of these traits is sufficient for belief, and hence that the fact that faith-based religious acceptances and idées fixes exhibit these traits should not be taken to entail that these states are beliefs. Thus, I aim to defuse the intuition that these states are beliefs by undermining the model of belief that undergirds the intuition.

First, consider motivational role. It may be thought that if a mental state governs action in the right way—for instance, disposes one to attend church, pray, and attempt proselytization of others—that this is sufficient for its being a belief. However, as many theorists have by now pointed out, it is not only belief which can motivate action. Arguably, pretenses and suppositions also guide action (Gendler, 2007; Gendler, 2008; Velleman, 2000). For instance, suppose you are pretending to be an elephant. You might wave your trunk and walk clumsily and slowly. You don’t (let’s stipulate) believe you are an elephant. Plausibly, your pretense that you are an elephant itself motivates these actions (Velleman, 2000). It may be that there are certain kinds of motivational roles that only belief can play, but it is at least not obvious what these would be.

Second, consider sincere assertion. It may be that as a general rule, if some subject sincerely asserts p, that subject believes p. But there are cases in which sincere assertion that p occurs even when the subject does not believe p. The reason for this is quite simple: subjects don’t always know what they believe and can also have false beliefs about what they believe. For instance, consider a case of self-deception in which a subject believes that his husband is cheating on him but cannot admit this to himself. Such a subject might sincerely assert that his husband is faithful, while nevertheless exhibiting behavior that is consistent with his belief, such as feeling sad when his husband calls yet again to say that he will be working late, and asking his husband more questions than usual about his whereabouts. If this sort of case is so much as possible, sincere assertion that p does not entail belief that p.21

Finally, consider inferential promiscuity, which is availability as a premise in a wide range of inferences. Mental states that are inferentially promiscuous have a kind of productive power; they can inferential-causally generate new mental states. Inferential promiscuity may be a necessary condition on belief, but it is not sufficient for belief. Consider that attitudes other than belief, such as supposition, also exhibit inferential promiscuity. For instance, suppositions can act as premises in arguments by reductio.22

Consider the geometry student tasked with proving that no triangle has four sides. To do this, this student might suppose, for the sake of proving otherwise, that there is a triangle that has four sides and then attempt to generate a contradiction, as part of a demonstration that the original supposition is false. We might say that this student ‘hypothetically adds to her stock’ of beliefs that there is a triangle that has four sides, but whatever it is to hypothetically add to one’s stock of beliefs that there is a triangle that has four sides, it is not to believe that there is a triangle that has four sides. It is to suppose it or merely entertain it.23

Given these considerations, we should be at least doubtful whether motivational role, sincere assertion, or inferential promiscuity are sufficient for belief. So, we should be at least doubtful whether faith-based religious views and idées fixes are beliefs. Thus, that the revisability view excludes such states from the class of belief does not suggest a reason to reject the revisability view.

Moreover, the argument in favor of the revisability view is simultaneously a reason for excluding faith-based religious views and idées fixes from the class of belief: because the norm of revision and ‘ought’ implies ‘can’ cannot be simultaneously true of these states, we should not class them as beliefs.

Consider again the subject who suffers from an unrevisable idée fixe that the farmer’s market is on Fridays. Notice that the following two claims—which are instances of the first two premises of the argument from the norm of revision—cannot be simultaneously true of this subject:

(1*) If S’s mental state that represents ‘the farmer’s market is on Fridays’ is a belief, then S has a pro tanto requirement of rationality to revise that mental state.

(2*) If S has a pro tanto requirement of rationality to revise her mental state that represents ‘the farmer’s market is on Fridays,’ then S is nomically capable of revising that mental state.

The subject’s idée fixe cannot simultaneously satisfy (1*) and (2*). But as the discussion in §2 demonstrated, we have independent grounds for accepting the broader principles of which (1*) and (2*) are merely instances. If the subject’s idée fixe really is a belief, it is subject to the norm of revision and hence should satisfy (1*). But if it is subject to the norm of revision, it should enjoy a correlative nomic capacity to be revised and hence should satisfy (2*). That the idée fixe cannot meet both of these requirements counts as a positive reason to exclude it from the class of belief.
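The exclusion has the form of a reductio, which can be rendered schematically (the letters are my shorthand, not the paper’s: B(i) for ‘the idée fixe i is a belief’, O(S, i) for ‘S is pro tanto rationally required to revise i’, C(S, i) for ‘S is nomically capable of rationally revising i’):

```latex
\begin{align*}
&\text{Assume } B(i).\\
&\text{(1*)}\quad B(i) \rightarrow O(S, i)
  \qquad \text{(norm of revision)}\\
&\text{(2*)}\quad O(S, i) \rightarrow C(S, i)
  \qquad \text{(epistemic `ought' implies `can')}\\
&\text{Hence } C(S, i). \text{ But } i \text{ is unrevisable, so } \neg C(S, i).\\
&\text{Contradiction; therefore } \neg B(i).
\end{align*}
```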

In short, idées fixes exhibit belief-like traits, such as inferential promiscuity, motivational role, and a connection to sincere assertion, and because many mental states with those features are beliefs, we mistook idées fixes for beliefs. But closer reflection reveals idées fixes not to be beliefs, but rather to be pretenses, assumptions, or non-doxastic delusions. The same points apply mutatis mutandis to faith-based religious acceptances.

4 Conclusion

I have developed and defended the view that all beliefs are necessarily nomically capable of being rationally revised in response to available, sufficiently strong contravening evidence. I have argued that this view is weak enough to accommodate beliefs that are for contingent reasons unresponsive to evidence. At the same time, the view is strong enough to accommodate belief’s susceptibility to the norm of revision. Views which reject a connection between belief and a nomic capacity to be revised struggle to accommodate belief’s susceptibility to this norm.

The revisability view has the surprising result that some faith-based religious acceptances and idées fixes are not beliefs. Though initially counter-intuitive, this result should not be taken as a reason to reject the revisability view, as the presumption that faith-based religious acceptances and idées fixes are beliefs is rooted in an incorrect view about which conditions suffice for belief.


References

Adler, J. 2002. Belief’s Own Ethics. Cambridge, MA: MIT Press.

Baillie, J. 2013. ‘The Expectation of Nothingness.’ Philosophical Studies, 166(1): 185-203.

Bayne, T., & Pacherie, E. 2005. ‘In Defence of the Doxastic Conception of Delusions.’ Mind & Language, 20(2): 163-188.

Bortolotti, L. 2010. Delusions and Other Irrational Beliefs. Oxford: Oxford University Press.

Bykvist, K. and Hattiangadi, A. 2007. ‘Does Thought Imply Ought?’ Analysis 67: 277 – 285.

Cassam, Q. 2010. ‘Judging, Believing, and Thinking.’ Philosophical Issues (20)1: 80-95.

Chrisman, M. 2008. ‘Ought to Believe.’ Journal of Philosophy 105: 346 – 370.

Cohen, L. J. 1992. An Essay on Belief and Acceptance. Oxford: Clarendon Press.

Currie, G. and Ravenscroft, I. 2002: Recreative Minds. Oxford: Oxford University Press.

Davidson, D. 1984. Inquiries into Truth and Interpretation. Clarendon: Oxford University Press.

Davies, M., & Coltheart, M. 2000. ‘Introduction: Pathologies of Belief.’ Mind & Language 15(1): 1-46.

Dennett, D. C. 1989. The Intentional Stance. Cambridge, MA: MIT Press.

Doggett, T., & Egan, A. 2007. ‘Wanting Things You Don’t Want.’ Philosophers’ Imprint 7(9): 1-17.

Döring, F. 1990. ‘Limits of Irrationality,’ Philosophy and Psychopathology 86-101.

Egan, A. 2009. ‘Imagination, Delusion, and Self-deception’ in Delusion and Self-deception: Affective and Motivational Influences on Belief Formation, eds. T. Bayne and J. Fernandez. New York: Psychology Press.

Elliot, A. and Devine, P. 1994. ‘On The Motivational Nature of Cognitive Dissonance: Dissonance as Psychological Discomfort,’ Journal of Personality and Social Psychology 67: 382-394.

Feldman, R. 2000. ‘The Ethics of Belief,’ Philosophy and Phenomenological Research 60: 667-695.

Frankish, K. 2012. ‘Delusions, Levels of belief, and Non-doxastic Acceptances,’ Neuroethics 5(1): 23-27.

Friedman, J. 2013. ‘Suspended Judgment.’ Philosophical Studies 162(2): 165-181.

Gawronski, B. 2012. ‘Back to the Future of Dissonance Theory: Cognitive Consistency as a Core Motive.’ Social Cognition 30(6): 652-668.

Gendler, T. S. 2007. ‘Self-deception as Pretense,’ Philosophical Perspectives 21(1): 231-258.

Gendler, T. S. 2008. ‘Alief and Belief,’ The Journal of Philosophy 634-663.

Glüer, K. and Wikforss, Å. 2013. ‘Aiming at Truth: On the Role of Belief.’ Teorema 32(3): 137-162.

Graham, P. A. 2011. ‘‘Ought’ and Ability.’ The Philosophical Review 120: 337- 382.

Harmon-Jones, E. 2000. ‘A Cognitive Dissonance Theory Perspective on the Role of Emotion in the Maintenance and Change of Beliefs and Attitudes.’ Emotions and Beliefs 185-211.

Harmon-Jones, E. and Harmon-Jones, C. 2007. ‘Cognitive Dissonance Theory After 50 Years of Development.’

Hattiangadi, A. 2010. ‘The Love of Truth.’ Studies in History and Philosophy of Science 41: 422 – 432.

Haug, M. 2011. ‘Explaining the Placebo Effect: Aliefs, Beliefs, and Conditioning.’ Philosophical Psychology 24(5): 679-698.

Hazlett, A. 2013. A Luxury of the Understanding: on the Value of True Belief. Oxford: Oxford University Press.

Hieronymi, P. 2006. ‘Controlling Attitudes.’ Pacific Philosophical Quarterly 87(1): 45-74.

Huddleston, A. 2012. ‘Naughty Beliefs.’ Philosophical Studies 160: 209-22.

Kelly, T. 2008. ‘Evidence: Fundamental Concepts and the Phenomenal Conception,’ Philosophy Compass 3(5): 933-955.

Lehrer, K. 1976. ‘‘Can’ in Theory and Practice: A Possible World Analysis.’ in Action Theory, eds. Brand and Walton. Dordrecht: D. Reidel.

Leon, M. 1992. ‘Rationalising Belief.’ Philosophical Papers, 21(3): 299-314.

Lipton, P. 2007. ‘Accepting Contradictions,’ in Images of Empiricism: Essays on Science and Stances, ed. B. Monton. Oxford: Oxford University Press.

Littlejohn, C. 2012. ‘Does ‘Ought’ Still Imply ‘Can’?’ Philosophia 40(4): 821-828.

Mandelbaum, E. 2014. ‘Thinking is Believing.’ Inquiry 57(1): 55-96.

McKay, R. T., & Dennett, D. C. 2009. ‘The Evolution of Misbelief.’ Behavioral and Brain Sciences 32(06): 493-510.

Mizrahi, M. 2012. ‘Does ‘Ought’ Imply ‘Can’ From an Epistemic Point of View?’ Philosophia 40: 829-840.

Nickerson, R. S. 1998. ‘Confirmation Bias: A Ubiquitous Phenomenon in Many Guises.’ Review of General Psychology 2(2): 175.

Ryan, S. 2003. ‘Doxastic Compatibilism and the Ethics of Belief.’ Philosophical Studies 114(1): 47-79.

Schwind, C., Buder, J., Cress, U., & Hesse, F. W. 2012. ‘Preference-inconsistent Recommendations: An Effective Approach for Reducing Confirmation Bias and Stimulating Divergent Thinking?’ Computers & Education 58(2): 787-796.

Schwitzgebel, E. 2001. ‘In‐between Believing.’ The Philosophical Quarterly 51(202): 76-82.

Schwitzgebel, E. 2002. ‘A Dispositional, Phenomenal Account of Belief.’ Noûs 36(2): 249-275.

Schwitzgebel, E. 2012. ‘Mad Belief?’ Neuroethics 5(1): 13-17.

Shah, N., & Velleman, J. D. 2005. ‘Doxastic Deliberation.’ The Philosophical Review 497-534.

Slaughter, V., Heron-Delaney, M., & Christie, T. 2011. ‘Developing Expertise in Human Body Perception.’ Early Development of Body Representations, eds. Slaughter & Brownell. Cambridge: Cambridge University Press.

Steup, M. 2008. ‘Doxastic Freedom.’ Synthese 161(3): 375-392.

Stich, S. P. 1978. ‘Beliefs and Subdoxastic States.’ Philosophy of Science 499-518.

Strevens, M. 2012. ‘Notes on Bayesian Confirmation Theory.’ unpublished manuscript.

van Leeuwen, D. N. 2009 ‘The Motivational Role of Belief.’ Philosophical Papers 38(2): 219-246.

Velleman, D. 2000. The Possibility of Practical Reason. Oxford: Oxford University Press.

Vranas, P. 2007. ‘I Ought, Therefore I Can.’ Philosophical Studies 136: 167 – 216.

Way, J. 2010 ‘The Normativity of Rationality.’ Philosophy Compass 5(12): 1057-1068.

Wedgwood, R. 2002. ‘The Aim of Belief.’ Noûs, 36(s16), 267-297.

Whiting, D. 2010. ‘Should I Believe the Truth?’ Dialectica 64: 213-224.


Notes

  1. Also, the interpretative view of mind of the kind associated with Davidson (1984) and Dennett (1989) entails that beliefs are necessarily evidence-responsive in some way, though it is unclear exactly how strong this evidence-responsiveness must be. For a discussion, see Döring (1990).
  2. A notable exception is found in Shah and Velleman (2005), who suggest that in order to distinguish beliefs from other cognitive attitudes, we must posit that beliefs are necessarily evidence-responsive. For a discussion of this argument, see van Leeuwen (2009).
  3. For evidence that beliefs are sometimes formed in response to poor or no evidence, see Mandelbaum (2014). For evidence that beliefs are sometimes maintained in the face of conflicting evidence, see Nickerson (1998).
  4. This general approach owes much to Mandelbaum (2014).
  5. Throughout, all references to revision should be understood to be references to rational revision, unless stated otherwise.
  6. This method of analyzing capability is loosely based on Lehrer (1976).
  7. Arguably, subjects can survive changes in the kind of creature they are. There is an issue of how to characterize revisability in light of this fact. It’s at least conceivable that a goldfish might be turned into an orangutan and manage to persist as the very same creature that it was before its transformation (it won’t be the same species, but it might be the very same creature). If this possibility is a genuine one, then we need to further restrict the range of worlds that are relevant to revisability. Otherwise, a mental state held by a goldfish might count as revisable in virtue of the fact that it would be revised, were that goldfish transformed into an orangutan. This result would make the revisability view too weak to be of much interest. We can guard against this outcome by restricting the range of relevant possible worlds to those in which the relevant subject persists as the very same kind of creature that she is in the actual world.
  8. On some views of the persistence conditions of mental states, it is not possible for the very same mental token to occur in two different subjects, even at different times. On such views, the proposed limitation on revisability will be harmlessly redundant.
  9. I mention belief suspension separately since it might constitute a sui generis kind (Friedman, 2013).
  10. One way a mental state can count as revisable is by decreasing in strength in response to conflicting evidence. In order for some mental state to decrease in strength without disappearing altogether, it must permit of degrees of strength. Not all revisable states will permit of degrees but we might think of those that do as permitting of degrees that can range from 0 up to and including 1 on the real numbers. Where these mental states are beliefs, these degrees are called credences and might be helpfully construed as a measure of subjective confidence (Strevens, 2012). I am supposing that there cannot be states which represent to degree 0 that p. When a belief actually reaches (as opposed to merely approaches) a credence of 0, it ceases to exist.
  11. Going forward, I sometimes abbreviate ‘sufficiently strong, available conflicting evidence’ with ‘sufficiently strong conflicting evidence’ or just ‘conflicting evidence.’
  12. Two mental states are inconsistent in the relevant sense just in case the propositions they each represent cannot simultaneously obtain. The relevant kind of conflict is thus that of logical conflict. Pairs of mental states whose contents are transparently contradictions of each other (‘Austria is in the E.U.’ and ‘Austria is not in the E.U.’) will count as inconsistent, as will pairs of mental states whose contents conflict in a less transparent way (‘Clark Kent cannot fly’ and ‘Superman can fly’).
  13. This characterization of masks borrows from the dispositions literature, which is the context in which Bird and Johnston discuss masks.
  14. For defenses of the view that belief has truth as an aim, see Shah and Velleman (2005) and Wedgwood (2002). For a defense of the view that belief’s aim (if any) is knowledge, see McHugh (2011).
  15. Two-month-old infants cannot discriminate accurate from ‘scrambled’ depictions of the human body (Slaughter et al., 2011, pp. 87-91).
  16. See Mizrahi (2012) and Ryan (2003) for recent criticisms of epistemic ‘ought’ implies ‘can.’ For recent defenses, see Hattiangadi (2010), Littlejohn (2012), and Vranas (2007).
  17. For recent criticisms of belief’s voluntariness, see Hieronymi (2006) and Ryan (2003). For a recent defense, see Steup (2008).
  18. This case is slightly modified from Ryan’s so that it concerns already existing beliefs. Also, though Ryan’s initial description of the case does not explicitly state that it is a voluntary capacity to revise one’s beliefs that is relevant, the subsequent discussion of the passage makes explicit that it is a voluntary capacity that is at stake.
  19. I am presuming a certain view of the persistence of mental states, on which a particular mental state can survive changes in the kind of attitude it is. This is not an essential part of the story, though. It could be that the subject’s belief was destroyed and replaced by a non-doxastic delusion with the same content.
  20. Something like the anti-revisability view is accepted by Gertler (2011), Huddleston (2012), Mandelbaum (2014), Bayne and Pacherie (2005: 183), and Bortolotti (2011: 124).
  21. Cohen (1992, pp. 68-73) also argues that sincere assertion does not entail belief, as does Mandelbaum (2014, pp. 79-81).
  22. Strictly speaking, mental states do not serve as premises in arguments. Arguments are sets of propositions structured by a relation of putative validity. As such, they are abstract, mind-independent entities—certainly not the sorts of things that contain mental states. Saying that a mental state ‘serves as a premise’ in an argument is a loose way of saying that a mental state figures in a psychological inference. Psychological inferences are sets of mental states structured by a relation of putative validity; they are the mechanisms by which subjects grasp arguments.
  23. Certain kinds of counterfactual reasoning provide more evidence of the inferential promiscuity of suppositions. Consider a case in which a consumer researcher asks you to fill out a survey about food choices, which includes the following question: Suppose you are having pizza for dinner. What would you order with it? Beer or wine? One way to answer this question would be to simply comply with the task instructions: to suppose that you are having pizza for dinner and then—by using inductive evidence from ‘seeing’ what your choice would be in that counterfactual situation—generate a hypothesis about what you would do in the relevant counterfactual. But of course, you don’t believe you are having pizza for dinner. You merely entertain the thought or suppose it, and this supposition plays an essential inferential role in the generation of your hypothesis.


  1. Grace Helton’s admirably clear, resourceful paper presents a crisp argument for the revisability view of belief (RVB).

    1. If S’s belief that m is contravened by available, sufficiently strong evidence, then S has a pro tanto obligation to revise m in response to that evidence.
    2. ‘Ought’ implies ‘can’: If S has an obligation to revise her belief, then it must be nomically possible for her to do so.
    3. Therefore, if S can’t revise her propositional attitude in the face of contrary evidence (i.e., it’s nomically impossible for her to do so), then that propositional attitude isn’t a belief.

    (3) is the RVB. It places a normative constraint on beliefs. Its slogan might be: No belief without rational revisability.

    Helton has proposed a really ingenious argument. The ‘ought’ implies ‘can’ principle is typically used in its contrapositive form: If you don’t have the ability to X, then you’re not obligated to X. Philosophers use it to go from a descriptive claim (or fact) to a normative one. But Helton uses the principle to take us from a normative claim (about our epistemic obligations) to a descriptive one (about the nature of belief).
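    The contrast between the two directions of use can be put schematically. The formalization below is my own gloss, not Bishop’s or Helton’s notation; read O as ‘is obligated to bring about’ and the diamond as nomic possibility:

    ```latex
    % 'Ought' implies 'can' (OIC):
    O(\varphi) \rightarrow \Diamond \varphi

    % The usual, contrapositive use (descriptive premise, normative conclusion):
    \neg \Diamond \varphi \rightarrow \neg O(\varphi)

    % The use at issue here (normative premise, descriptive conclusion):
    % premise 1 gives O(\text{revise } m) for any belief contravened by strong
    % evidence; OIC then gives \Diamond(\text{revise } m); contraposing, an
    % attitude for which \neg \Diamond(\text{revise } m) cannot satisfy
    % premise 1, and so fails to qualify as a belief (premise 3).
    ```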

    Let’s unpack RVB in more detail. Suppose S’s propositional attitude that m has many properties typical of belief (S is disposed to sincerely assert it, the propositional attitude is inferentially promiscuous, and it plays a belief-like causal role in S’s behavior). Is it a belief? If, when confronted with evidence that rationally requires S revise his attitude to m, S is incapable of revising it, it’s not a belief. This is the revisability requirement. There are two important features of this requirement that need explanation.

    The capacity restriction: S might fail to revise his attitude toward m in the face of contrary evidence, even though he is capable of it, because some interfering factor “masks” his capacity. Since many of our beliefs can be masked in this way, it would be deeply counterintuitive for RVB to bar all maskable propositional attitudes from being beliefs. And so the revisability requirement demands only that S be capable of revising m in the face of contrary evidence in the following sense: Keeping fixed S’s psychological mechanisms, as long as there is some nomically possible world in which S revises m in the face of that contrary evidence, it’s not barred from being a belief.

    The actual evidence restriction: We must only consider nomically possible worlds in which S has all and only the evidence actually available to S. So if in the actual world, S never runs into evidence that conflicts with m, Helton argues that the revisability requirement is trivially satisfied: The propositional attitude never fails to be revised in response to conflicting evidence. This doesn’t mean it’s a belief. It just means that RVB doesn’t bar it from being a belief.

    1. The accidental belief problem

    Consider Helton’s Farmer’s Market example: A reasoner, S1, has the view that the farmer’s market takes place on Fridays, but a minor brain lesion has left her unable to revise this propositional attitude. S1 is faced with quite a lot of powerful evidence against her view that the farmer’s market is on Fridays. Even though S1’s propositional attitude has many belief-like properties (sincere assertion, inferential promiscuity, causal role in behavior), it is not a belief because it violates the revisability requirement.

    Now consider a variation on this example: S2 is just like S1 except that, by pure happenstance, S2 is never faced with contrary evidence. She never sees the farmer’s market flyers, her friends never tell her about the farmer’s market being on Sunday, she never accidentally stumbles upon the market on a Sunday, and she never decides to go to the farmer’s market on a Friday. Given the actual evidence restriction, RVB doesn’t bar S2’s propositional attitude from being a belief. And given its other belief-like qualities, there doesn’t seem to be any other way to bar it from being a belief. RVB implies that S2 has an accidental belief – she believes that m because by pure accident she happened not to confront evidence against it.

    The existence of accidental beliefs is a problem. Suppose S1 first gets contrary evidence at time t1. Prior to t1, at t0, suppose that S1 and S2 are physically identical twins and living in the actual world. So on any plausible way of individuating content, the contents of their propositional attitudes are identical. Does S1 at t0 believe that the farmer’s market is on Friday? It’s not clear how to answer this question without getting into trouble. If the answer is yes, it seems strange that S1’s propositional attitude would be a belief at t0 and then stop being a belief at t1. If the answer is no, then at t0, S2 has a belief S1 doesn’t have even though they’re physically identical and the contents of their propositional attitudes are identical.

    Helton understands beliefs to be “entities we must posit to explain some interesting range of empirical phenomena.” From an explanatory or predictive perspective, there seems to be no reason to categorize the propositional attitudes of S1 and S2 in different ways.

    2. The conjunction problem

    Sometimes my views are in a sorry state. Right now, for example, I have obligations to revise 18 of my views. I have the psychological wherewithal to revise them one-by-one. And while I am able to entertain the conjunction of all 18 propositions, I don’t have the psychological wherewithal to revise them all at once: given nomological facts about human computational capacity, I can’t revise the conjunction of P1-P18. Apply the revisability requirement to each of my 18 individual propositional attitudes, and they count as beliefs. But apply it to the propositional attitude whose content is the conjunction of those 18 propositions, and it’s not a belief. This is a result to be avoided.

    The obvious fix is to add a recursive clause to RVB, something along the lines of: If S believes B1 and B2, and B1 and B2 logically imply B3, and S recognizes that B1 and B2 logically imply B3, and S is disposed to accept B3 as true, then S believes B3 as well. Let’s suppose that this clause, or something like it, solves the conjunction problem. I think RVB together with the recursive clause is still going to have problems.

    Recall the reasoner from the original Farmer’s Market example. Suppose she holds B1 and B2, and she can revise them. So they’re beliefs. And she recognizes that together they imply that the farmer’s market is on Fridays. But because of the brain lesion, she cannot revise her view that the farmer’s market is on Fridays. RVB plus the recursive clause would yield the result that she believes that the farmer’s market is on Fridays despite the fact that she is nomically incapable of revising her view. This result seems contrary to the spirit of RVB: No belief without rational revisability.

    3. The boundary problem

    Consider two forms of the ‘ought’ implies ‘can’ principle.

    Weak: If I’m obligated to X, then it is nomically possible for me to X.
    Strong: If I’m obligated to X, then it is practically possible for me to X.

    Helton embraces the weak principle. But it’s not clear why we should prefer it to the strong principle. Consider the following case.

    Sportswriter: S is a sportswriter who holds that m, where m is a mathematical claim. m isn’t important to S. Were she to discover that it’s false, she’d give it a thought, but not a second thought. S has evidence E, and E rationally compels the revision of m. But S doesn’t realize that E rationally compels the revision of m. To get into a position to realize that E rationally compels her to revise m, S would have to expend some effort. How much effort? Consider some possibilities:

    Superhuman: the laws of nature dictate that if S were to expend all her resources evaluating m, she would fall just barely short of rationally revising m (because she’d die and wouldn’t have the time, or she’d lack the energy or computational power).

    Almost-Superhuman: the laws of nature dictate that if S were to expend all her resources evaluating m, she would just barely have enough resources to rationally revise m.

    Extraordinarily Difficult: S could rationally revise m if she were to abandon every person and project that gives shape and meaning to her life in order to spend a decade evaluating m.

    Really Tough: S could rationally revise m if she were to spend scores of hours evaluating m.

    Tough: S could rationally revise m if she were to spend 3 hours evaluating m.

    Simple: S could rationally revise m if she were to spend 20 seconds evaluating m.

    Any plausible ‘ought’ implies ‘can’ principle will have to identify a boundary between obligations and non-obligations somewhere between Simple and Superhuman. Some might object to the ‘ought’ implies ‘can’ principle on the grounds that there’s no way to draw a principled, non-arbitrary boundary of this sort. But that’s not my worry.

    The objection I want to press is this: Any clear boundary we draw between our obligations and our non-obligations is going to cause problems for RVB. Suppose we follow Helton and draw the boundary between Superhuman and Almost-Superhuman. So in Superhuman S’s view that m is not a belief, but in Almost-Superhuman and all the rest of the cases it is a belief. Now consider the following version of Sportswriter:

    Time: A year ago, when S was 33, the laws of nature dictated that she did have enough time and energy to rationally revise m. But she has ignored m over the past year. Today, the laws of nature dictate that she doesn’t have enough time and energy to rationally revise m.

    RVB implies that S’s view that m was a belief last year, but it’s not a belief today. This seems counterintuitive. Note that it doesn’t matter exactly where one draws the obligation/non-obligation boundary. As long as the boundary is a clear one, RVB will suffer from a boundary problem.

    If this is right, perhaps the conclusion to draw is that the boundary between our obligations and our non-obligations is fuzzy. And so perhaps an interesting upshot of RVB is that a propositional attitude’s being a belief is not always an all-or-nothing affair.

  2. I’d like to thank Grace Helton for her extremely thought-provoking paper. Overall, I very much agree with her view and have advocated something quite similar myself as part of a larger project (“Religious credence is not factual belief,” 2014). Whereas I use the notion of evidential vulnerability in that piece, Grace appeals to revisability, but we are after notions that are at least very close to the same. I advocate including evidential vulnerability in the definition of factual belief; Grace advocates including revisability as a necessary condition on beliefs. But the mental states she wishes to include in, and exclude from, the category of beliefs seem to me to map onto the mental states I wish to include in, and exclude from, the category of factual belief, so we may just have notational variants for the same set of mental states that we find interesting.

    Here I’d like to offer two additional pieces of agreement and corresponding elaboration, and then I’d like to raise a big question about her overall argumentative strategy. This big question will end up suggesting I think she should try getting to the views that we share via a very different route.

    The first piece of agreement is with the idea that there can be mental representations that have the forward effects of beliefs (by and large) but still, lacking revisability, ought not to be counted as beliefs. I’m not sure if I can point to anything that’s both empirically supported and fits this description exactly, but there are a couple of candidates worth considering.

    Grace divides mental states that can’t be revised in response to evidence into two categories: (1) those that were formed initially without any influence from evidence and (2) those that were formed initially with influence from evidence, which she calls idées fixes.

    A candidate example of (1) would be the representations in our various folk systems, like folk biology, folk physics, and folk psychology. Take folk physics. If cognitive scientists like McCloskey (1983) are to be trusted, normal humans have an implicit theory of the physical world which resembles medieval impetus theories of physics. This implicit theory, or something like it, is probably put in place by evolution. And evidence suggests that even trained physics teachers, who most certainly do not believe the theory, fall back on it when cognitive load is high—the intuitive representations are used in guiding actions and even in making predictions about how objects will behave. Such teachers may well even experience frustration that they can’t revise the intuitive representations of their folk physics, precisely because they don’t believe their contents.

    An example of (2) [an idée fixe] might be the representations of persistent danger that persons with PTSD seem to have consequent on their traumatic incident. True, the traumatic incident was evidence for the truth of the contents of the representation of danger at the time. But even in an environment in which there is ample and even cognized evidence that the danger is no longer there, the representations persist. Furthermore, they may guide action, be incorporated into inferences, and even get verbalized sincerely (even if disavowed on other occasions).

    So this is the first point of agreement. It seems like there are states that have the forward effects of beliefs without being beliefs, due to their lack of revisability. The agent basically says to herself, “Dammit, why can’t I get rid of this thing that I don’t even believe!”—and feels frustrated that more exposure to evidence doesn’t do the trick. The one question I’d raise for Grace on this issue is the following: if we individuate the forward effects of beliefs more precisely, would the kinds of states I’ve mentioned in relation to (1) and (2) actually turn out to have the same functional roles as ordinary beliefs? We may, of course, say crudely that they guide behavior and inference in the manner of belief. But perhaps they show forms of context-dependence that ordinary beliefs lack, or perhaps their manner of guidance is somewhat different. If this is so, then it may be that in reality the downstream functional features of beliefs, properly individuated, cluster together with revisability. I will remain agnostic on this here. But if it is the case, then it is a further interesting project to ask why it should be so.

    The second point of agreement is with the idea that it may make sense to exclude faith-based “beliefs” from the class of beliefs of interest. Now, the point is really that, among those mental states that pre-theoretically get called “beliefs,” there is actually a deep and interesting difference in psychological kinds. I am fond of the following sort of example: It’s September of 1977, and Sam and Terry both “believe” that Elvis is alive. Sam is not much of a news reader and so just didn’t see the headlines the month before. She takes Elvis to be alive in a matter-of-fact sort of way and is planning to look up his concert schedule later that day. Terry, on the other hand, heard the news, but she “rejected it” and decided to “believe” Elvis was still alive, setting up a reverential shrine to worship him and everything.

    Now, are Sam’s and Terry’s “beliefs” to be regarded as the same sort of cognitive attitude from the standpoint of a more mature psychological theory? I am inclined to say that they aren’t. I call Sam’s a factual belief (Grace would just call it a belief) and call Terry’s a religious credence (I’m not sure what Grace would call it). There are a variety of differences between these types, but important among them is the fact that Sam’s belief would vanish as soon as she saw the relevant newspaper, whereas Terry is in a state of mind in which evidence is just not something that compels revision—more evidence might even make her mental state more entrenched. She is, in fact, in a situation that has compelling evidence for the falsity of her “belief” but in which she clings to the “belief” nevertheless.

    The deeper point here is not that Sam’s so-called “belief” is not really a belief—that is a terminological issue that I won’t go into.1 Rather, there is a distinction to be drawn that is highly relevant both for epistemology and psychological theory.


    Now for a question of method. Granting that I more or less agree with Grace’s conclusion, I have to confess that I’m rather unconvinced by her method of arguing for it.

    Should we argue from norms to descriptive facts about the world? Or should we rather say that, if the norms make presuppositions about how the facts in the world are, we should be open to revising those norms in light of the facts?

    For example, Pam and Fritz have a dog named Ireland. And they maintain that Ireland ought to come when she is called. We might wonder, however, whether Ireland has the hearing or intelligence to be capable of coming when she’s called.

    Suppose a clever philosopher were to construct the following argument:

    (1i) Ireland ought to come when she’s called.

    (2i) If Ireland ought to come when she’s called, then she can come when she’s called.

    (3i) Ireland can come when she’s called.

    Now, regardless of whether or not Ireland in fact has the capacity to come when she’s called (maybe she’s just been unmotivated to do so all this time), should we think of this as a good argument for the idea that she can?

    I think we shouldn’t. Rather, granting that 2i is true, we should use our experimental skills to assess whether 3i or something like it is the case. And once we’ve done that, we’ll know whether or not 1i is still a viable option. (The truth of 3i wouldn’t, of course, entail 1i; but it would leave it open.) This is because it’s entirely possible that Pam and Fritz were in error in holding 1i in the first place. And the question of whether or not they were in error depends precisely on the issue that we’re trying to settle, namely, whether or not Ireland can come when she’s called.

    Grace could be up to two projects, as far as I can tell. First, perhaps she is taking the class of “beliefs” to be an intuitively given class of psychological states that it is incumbent on us to theorize about, given the evidence, and in the process perhaps we should revise our intuitions about the class, and revise its extension, as we learn more. (Call this the psychological project.) If that’s what she’s up to, I don’t think her argument works. True, the revisability norm might be among the intuitive givens that help us identify the class of interest in the first place, but there’s no obvious reason why we should feel attached to it, if further investigation makes it seem dubious. It could be part of the ladder that we kick away once we’ve climbed up on it. Second, Grace could be saying something like this: anything worth calling a “belief” is something we should apply the revisability norm to, and hence anything worth calling a belief will be something that’s capable of being revised. (Call this the conceptual project.) I can’t tell if she’s more interested in the first or second project—the psychological or the conceptual. Still, even if it’s the second project she’s interested in, there will be a substantial question of whether or not the class of “beliefs” is non-empty in the actual world. And the question of whether the class is non-empty will determine if this second project is worth undertaking at all.

    So my conclusion is that on either project that Grace might be undertaking, her central argument in the project will be inconclusive. If she’s doing the psychological project, the argument will be question-begging, since the holding of her first premise just depends implicitly on accepting the conclusion (it will have flaws analogous to the flawed Ireland argument above). If she’s doing the conceptual project, then she may have managed very well to extrapolate some further implications of our concept of belief, but there will then still be empirical work to do to establish that the class of “beliefs” is non-empty, precisely because it’s not yet established that there’s anything in the world that has the property in question, namely, revisability in response to evidence. Again, I very much think there are such states and that there are beliefs in Grace’s sense. But to summarize the present considerations, I think the question of whether we should take the notion of belief to entail revisability depends on there being sufficient empirical motivation. In other words, it depends on there being evidence that there is a kind in nature, such that, in order to capture that kind, it’s worth crafting a notion of belief that includes revisability. The question of what norms it makes sense to apply to that category of belief should come after that descriptive investigation, not before.

    Thank you.

    References

    McCloskey, M. (1983) “Intuitive Physics,” Scientific American 248(4): 122-130.

    Van Leeuwen, N. (2014) “Religious credence is not factual belief,” Cognition 133: 698-715.


    *University of Johannesburg (Senior Fellow), University of Antwerp (Visiting)

    1. The issue of what terms to use is actually a bit of a mess, because it appears to be the case that in ordinary speech people tend to use “belief” more when they talk about ideological and religious attitudes. (Why bother talking about someone’s “belief” that four quarters make a dollar?) Philosophers and some cognitive scientists, however, take as paradigms of “beliefs” states with contents like the chair is blue or the water from the skinny tall jar can fit in the short fat jar. Grace is of course following the latter convention, but this, of course, poses a problem for making insights from psychological theory accessible to the public.
  3. I want to warmly thank both Michael Bishop and Neil Van Leeuwen for their wide-ranging and extremely insightful comments on my paper “The Revisability View of Belief.” I have benefited enormously from their feedback. Here, I will focus on their objections, starting with Bishop’s.

    1 Response to Bishop

    Bishop’s first objection stems from what he terms The Accidental Belief Problem. Suppose that some subject, S1, has a certain mental state that she is not capable of rationally revising. Through sheer happenstance, S1 never encounters evidence that conflicts with this mental state. Now suppose that a physical duplicate of S1, call her S2, has a mental state that is content-identical with S1’s mental state. Like S1, S2 cannot rationally revise her mental state, but S2 enjoys ample evidence that conflicts with her mental state. According to the view I defend, the revisability view of belief, some mental state is a belief only if that state’s subject is nomically capable of rationally revising that state in response to (actual) conflicting evidence. Thus, the revisability view permits that S1’s mental state might be a belief at the same time that it predicts that S2’s mental state is not a belief. This is highly counter-intuitive, as S1 and S2 are physical duplicates. Whatever beliefs the one has, the other should as well.

    I think that addressing the Accidental Belief Problem requires a refinement of the relevant notion of revisability along broadly modal lines, as follows: some state is revisable only if that state would be revised in worlds (if any) in which that state is contravened by sufficiently strong, available evidence (whether or not the actual world is one in which this state is contravened by evidence). This construal is consistent with how we think of other capacities. A glass can count as breakable in virtue of the fact that it would shatter if force were applied to it, even if in the actual world force never happens to be applied to it. This modal rendering of revisability allows us to say that S1’s unrevisable mental state is not a belief, for the same reason that S2’s is not.

    Bishop’s second concern arises from what he dubs The Conjunction Problem. Suppose a subject has 17 beliefs, B1-B17, and that each of these beliefs is revisable. However, the mental state that represents the conjunction of these beliefs’ contents is not revisable. On the revisability view, the conjunctive mental state won’t itself count as a belief, even though each of B1-B17 will. Bishop suggests that this “is a result to be avoided.”

    As it turns out, I don’t wish to avoid this result. Perhaps the result is counter-intuitive (though I’m not convinced that it is), but my method does not give very much weight to pre-theoretic intuitions about belief, so I am not very troubled about that.

    More worrisome is the concern that this result shows the revisability view to in some cases distinguish between beliefs and non-beliefs in an arbitrary way (I don’t take this to be Bishop’s particular concern, but I think it may be worth discussing here). As it turns out, the distinction the view makes in the described case is principled. I am supposing that in this case, the reason the subject can’t revise the state that represents the conjunction of each of B1–B17’s contents is due to a processing limitation. She simply can’t keep all of the propositions straight in her head long enough to consider the implications of her evidence for them. Processing limitations are a kind of nomic limitation, so the asymmetry between the conjunctive state and (say) this subject’s belief B1 is that one is saddled with a nomic limitation that the other is not. By classifying the conjunctive state as a non-belief and B1 as a belief, the view sorts the two states along a natural dimension.

    Bishop makes his final point—which I take to be less an objection than it is a suggestion about my view’s implications—by discussing what he calls The Boundary Problem. To simplify somewhat, this is the problem that there are several potentially plausible versions of ‘ought’ implies ‘can’ other than the one I focus on (such as ‘ought’ implies practically ‘can’), and at least some of these permit of different precisifications. This may suggest that in some circumstances, there will be no fact of the matter about whether a particular subject can revise a particular mental state. Since the motivation for the revisability view partly stems from ‘ought’ implies ‘can,’ “perhaps an interesting upshot of [the revisability view] is that a propositional attitude’s being a belief is not always an all-or-nothing affair.”

    I think Bishop is quite likely right that ‘ought’ implies ‘can’ can be precisified in different ways and that, given the strategy I use to argue for the revisability view, this may mean that some mental states are such that there is no fact of the matter (or perhaps merely an indeterminate fact of the matter) about whether they are beliefs. I would agree with Bishop that this is an “interesting upshot” and not something to be shied away from. So long as it’s the case that for some mental state to (determinately) be a belief, it must be revisable, I’m satisfied. I don’t intend my thesis to (on my view, implausibly) entail that all mental states are either determinately beliefs or determinately non-beliefs.

    2 Response to Van Leeuwen

    Van Leeuwen’s core concern is that my style of arguing for the revisability view of belief moves from normative to descriptive facts in a way that cannot possibly succeed. He states his worry in this way:

    Should we argue from norms to descriptive facts about the world? Or should we rather say that, if the norms make presuppositions about how the facts in the world are, we should be open to revising those norms in light of the facts?

    For example, Pam and Fritz have a dog named Ireland. And they maintain that Ireland ought to come when she is called. We might wonder, however, whether Ireland has the hearing or intelligence to be capable of coming when she’s called.

    Suppose a clever philosopher were to construct the following argument:

    (1i) Ireland ought to come when she’s called.

    (2i) If Ireland ought to come when she’s called, then she can come when she’s called.

    (3i) Ireland can come when she’s called.

    Now, regardless of whether or not Ireland in fact has the capacity to come when she’s called (maybe she’s just been unmotivated to do so all this time), should we think of this as a good argument for the idea that she can?

    If my style of arguing for the revisability view of belief were relevantly similar to the “Ireland” argument, it would be in trouble indeed(!) However, unlike the “Ireland” argument, my overall strategy of arguing for the revisability view appeals both to the argument from the norm of revision and to what the world is in fact like. So the strategy does not commit us to making descriptive claims about human psychology without bothering to so much as consider what this psychology might in fact be like.

    Van Leeuwen develops his concern more explicitly by suggesting two approaches to theorizing about belief that I might be taking and suggesting that on either one, the argument from the norm of revision fails. Here is his description of the first approach:

    First, perhaps she is taking the class of “beliefs” to be an intuitively given class of psychological states that it is incumbent on us to theorize about, given the evidence, and in the process perhaps we should revise our intuitions about and revise the extension of the class as we learn more.

    This is the approach I do in fact take (I discuss this issue in section 0, where I describe this method as “provisional reliance on a cluster of core claims typically associated with beliefs”). Van Leeuwen then poses a problem for this approach:

    If that’s what she’s up to, I don’t think her argument works. True, the revisability norm might be among the intuitive givens that help us identify the class of interest in the first place, but there’s no obvious reason why we should feel attached to it, if further investigation makes it seem dubious. It could be part of the ladder that we kick out once we’ve climbed up on it.

    I agree with this strategy and with the in-principle kick-out-ability of the norm of revision. But I argue that we don’t have to “kick out” the norm of revision because, as it turns out, the results of empirical investigation do not require this. I call the argument from the norm of revision “a strong reason” to endorse the revisability view of belief, but I do not take it to be a decisive reason. Whether we should, all things considered, endorse the revisability view further depends on whether the view would excessively narrow the class of human belief. Establishing that the view does not do this is the aim of section 3, in which I draw on empirical evidence to argue that the revisability view can permit into the class of belief a wide range of actual human mental states, including at least some irrational states.

    To put the point more starkly, if human psychology were such that no or few human mental states were revisable in the right way, we should give up on the revisability view. In such a scenario, the argument from the norm of revision would still constitute a strong reason in favor of the revisability view, but the competing requirement that we not excessively narrow the class of belief would trump this reason such that we should not, all things considered, endorse the revisability view. But happily, we don’t have to choose between the norm of revision on the one hand or a well-populated class of belief on the other. The revisability view lets us have both.

    1. Very cool paper, Grace! Along the lines of the Boundary Problem, but developing it in a different direction: I’m inclined to read the modal strength of your claim as weak (e.g., you mention possibly spending years exposed to counterevidence to the Sticky Fingers belief). Now if the claim is really that weak, does some of the practical and empirical juice drain out of the view? I’m wondering how tempted you are to make it bolder. For example, are you attracted to a Gendler-ish idea on which an implicit racist’s prejudicial attitudes are insufficiently evidence-sensitive to count as beliefs, even if with enough exposure to counterevidence those attitudes would be revised?

    2. Hi Grace: Thanks for insightful comments. I’ve enjoyed thinking about these issues. I have a worry. In response to the conjunction problem, you seem to deny the following principle:

      If S believes B1 and B2, and B1 and B2 logically imply B3, and S recognizes that B1 and B2 logically imply B3, and S is disposed to accept B3 as true, then S believes B3 as well.

      Suppose I’m at the gaming table. All night, I’ve been making three bets at a time: A bet on one proposition I believe, a bet on a second proposition I believe, and a side bet on a third proposition that I accept because I recognize that it is logically implied by the first two.

      On my final series of wagers, I bet on B1 and on B2. And then I take a side bet on B3 (which I recognize is logically implied by B1 and B2). While I can entertain B3, it’s so complex that I am nomically incapable of both entertaining and revising it (even though I am nomically capable of entertaining and revising B1 and B2). As I understand it, the Revisability View holds that B1 and B2 are beliefs, but not B3.

      If that’s right, wouldn’t a theory of belief that explained my pattern of behavior in a consistent way be superior – more unified, less ad hoc – to a theory of belief that took my last bet to have a very different explanation than all the prior ones?

      Suppose all night my sister was sitting next to me mimicking my bets. Because she’s younger, stronger and smarter than I am, B3 for her is revisable. As I understand it, the Revisability View would give consistent explanations of her behavior (she believes every proposition she bet on) but not mine (I believed every proposition I bet on but the last). The fact that revision of B3 is just beyond my nomic capacities but not hers, while true, doesn’t seem like a good reason to explain our behaviors differently.

      1. Hey Michael,

        Thanks so much for this – it’s a really great challenge for my view!

        You’re absolutely right that my reply to your Conjunction Problem means I reject the principle of belief that you sketch. I am also committed to the result that if you accept (would bet on) some claim that you take to be implied by what else you believe—but that acceptance is itself just beyond your capacity for revision—then that state is not a belief. And I would further endorse that your slightly-better-at-cognizing sister, who can revise that same acceptance, can count as believing it.

        So, now for your first worry: “wouldn’t a theory of belief that explained my pattern of behavior in a consistent way be superior—more unified, less ad hoc—to a theory of belief that took my last bet to have a very different explanation than all the prior ones”? I agree that if the three propositions you bet on were all beliefs, that fact would explain your behavior, and in a unified way. But I would deny that this is the only way to get such an explanation. On the account I would offer, what explains the parity between your willingness to bet on the first two propositions and on the last is that you accept all three propositions — the first two you accept in virtue of believing them (I’m assuming belief is a species of cognitive attitude/acceptance). And the last you accept non-doxastically. But it is the fact that these states are all cognitive attitudes/acceptances that accounts for why you bet on all of them.

        You also describe a related worry, about how to explain the parity between your betting tendencies and your sister’s: “The fact that revision of B3 is just beyond my nomic capacities but not hers, while true, doesn’t seem like a good reason to explain our behaviors differently.” My reply here is similar to what I would say about your first worry: the fact that you and your sister have different beliefs doesn’t mean we can’t explain your behavioral similarities. We need only appeal to the fact that you and your sister both accept B3. You accept it non-doxastically, whereas she accepts it in virtue of believing it. But it is the fact of your shared acceptances that explains why you bet in the same way. We needn’t additionally posit that you two have the same beliefs.

        Do you find this convincing, or has my explanation left something about the cases unexplained?

      2. Hi Grace: Thanks for this. I think your explanation of the behaviors is fine. But now the worry mutates: What role does belief rather than acceptance have to play in explanations of behavior? Compared to belief explanations, acceptance explanations are more perspicuous and do a better job cutting nature at the joints (as your account of the betting examples shows).

        There’s a general point here that goes back to arguments proposed by Donald Davidson & Jaegwon Kim some decades back: If you place a rationality constraint on belief, you’re going to run into trouble fitting beliefs into your best empirical (i.e., descriptive) laws and explanations.

        But maybe this is a conclusion to embrace: Since beliefs are the object of epistemic norms, it turns out they don’t play a role in explaining behavior.

      3. Hey Michael,

        Yes, I think that the combination of my responses to your Conjunction Problem & betting cases does commit me to further saying that the fact that some mental state is a belief does not, in its own right, contribute to the explanation of why that state has the motivational or action-guiding role that it has. Beliefs guide action, but only qua cognitive attitudes, not qua beliefs. I hadn’t quite realized I was so committed until forced to deal with your cases, so I’ve learned something (!)

        As it is, I’m not very troubled by the result that beliefs don’t, qua beliefs, explain action-guiding. It may be that this view makes beliefs hard to fit into a descriptive view of the mind, but I don’t quite see why that should be.

        Is the Davidson/Kim worry (or your worry) that if beliefs aren’t required to explain behavior, we have no reason to posit them? If it’s this, I would suggest that if we restrict ourselves to positing only those states required to explain behavior, we’ll end up with only one or two classes of propositional attitudes–maybe cognitive attitudes & conative attitudes or something like that. So I suspect that to be too restrictive a requirement for positing a state and that we should further allow that other things-to-be-explained, such as internal non-behavior-generating functional differences, can warrant positing a novel mental kind. This is because I’m assuming that it is not only beliefs that don’t make a special contribution to the explanation of behavior, but also entertained thoughts, suppositions, cognitive imaginings, and any other sub-category of cognitive attitude. E.g., I am presuming that a supposition won’t explain behavior qua supposition, only qua cognitive attitude. Nevertheless, these states play interestingly different internal functional roles, especially w.r.t. the kinds of circumstances that trigger them and extinguish them. E.g., we tend to relinquish suppositions when our aim in holding them has been fulfilled, but beliefs aren’t like this. And these differences in onset/extinction conditions can be empirically investigated. So I would think we have reason to posit these different varieties of cognitive state.

      4. My first inclination was to stop now that we have some nice agreement. But I’m not going to! Instead, I’m going to offer an appallingly careless, off-the-cuff suggestion – one that you should feel no obligation to answer (except perhaps with laughter):

        Once you say that belief-states play no role in explaining intentional behavior, a natural thing to say is that they play no role in causing intentional behavior, and so they can’t guide intentional behavior. (It’s acceptance or some other propositional attitude that does that.)

        What’s more, for any explanatory role you suggest for beliefs, I suspect that you can run the same sort of argument I did above: there’s some belief-like state that plays that role but that the Revisability View bars from being a belief. And so beliefs won’t explain that either. Some broader propositional attitude will.

        If all of that’s right, then I think you might have a really nice argument here for eliminativism about beliefs!

      5. Ha, well, at the risk of reintroducing (a kind of) agreement, I do see the vague suspicion lurking behind this: “for any explanatory role you suggest for beliefs, I suspect that you can run the same sort of argument I did above: there’s some belief-like state that plays that role but that the Revisability View bars from being a belief. And so beliefs won’t explain that either. Some broader propositional attitude will.”

        But offhand, I don’t quite see what non-belief state could take over the role of cognitive-attitude-that-can-respond-to-evidence. And (to really shatter the peace!) isn’t any state that plays that role necessarily a belief, whatever else it is or does?

      6. I have gone to the Indian restaurant many times (in part) because I accept that its food is delicious. Beliefs aren’t explaining my behavior. I think we’re in agreement here.

        Today I get highly reliable testimony that the food at the Indian restaurant is now really bad. So I no longer go to the Indian restaurant. In part that’s because I accept that the food at the Indian restaurant is no longer delicious and in fact is bad. Beliefs (again) aren’t explaining this change in behavior. If I’ve understood our discussion correctly, then we’re in agreement here.

        But doesn’t it seem that acceptance must be revisable in the light of evidence? Otherwise it’s not going to play a very useful role in explaining my change in behavior.

      7. Hey Michael,

        I definitely see this challenge: “But doesn’t it seem that acceptance must be revisable in the light of evidence? Otherwise it’s not going to play a very useful role in explaining my change in behavior.”

        I would suggest that what explains your change in behavior is merely the fact of a shift in your acceptances. But to further explain why the acceptances shifted in response to evidence, we’d have to appeal to the fact that those acceptances are also beliefs. This commits me to saying that that same pattern of acceptances would afford the same behavior even if those acceptances hadn’t shifted in response to evidence but had shifted for some non-evidential reason.

        This might seem odd because in ordinary contexts, we sometimes do appeal to causes of changes in a mental state to explain the behavior that derives from that state. E.g., we might explain the fact that you stopped going to the Indian place by appealing to the fact that someone told you the food had gotten bad. But on my view, these explanations are loose; facts about what caused some mental state to change won’t play an ineliminable role in an explanation of behavior.

  4. Thanks so much, Eric!

    I definitely see the concern about how weak the view should be taken to be. I think I want to stick with saying that states that respond to counter-evidence, but only after lengthy periods of exposure to that evidence, count as revisable. My reason for this is that I’m concerned here with what it takes to satisfy the norm of revision, and as I see it, a state satisfies this norm so long as it rationally revises somehow, even if that revision moves at a turtle pace or occurs only after lots of exposure to evidence. I do see the question about whether that might then permit implicit biases into the class of belief, and about how weak that then makes the view.

    A couple things. First, I would be ok with implicit biases counting as beliefs, though I’m not yet sure whether they will turn out to (this is an issue I’m really interested in and want to look into more). Even though implicit biases can in some cases be trained away, if this training exploits associative or Pavlovian processes, these states won’t be rationally revisable and so, will turn out on my view not to be beliefs. But if the evidence for the rational revisability of implicit biases accumulates, I’ll gladly let them into the class of belief.

    Second, implicit biases aside, I see the general worry that the revisability view must exclude some states from the class of belief—and hopefully quite a few—or risk winding up too weak to be of much interest. As it is, I’m pretty confident that at least some suppositions, moods, emotions, and perceptual experiences are not rationally revisable and so, won’t count as beliefs (it was in the context of trying to figure out what distinguishes perception from belief that I first started working on these issues). So I’m pretty confident that the view is strong enough to be interesting, even if it’s weaker than some other going accounts of belief.

  5. Thanks, Grace — that’s helpful. Persistent illusions are another interesting case. I wonder how to separate the rational from the associative. One case of particular interest to me is “illusions” so familiar that you’re not even momentarily fooled and in a way it looks right — e.g., “objects in mirror are closer than they appear” — learning, maybe — but rational or associative?

    1. I can’t help but jump in to insist that the rational/associative distinction here isn’t generally defensible unless one very carefully defines what one means by “associative”. I think we’ve learned from the old classicist/connectionist debates about architecture in the ’80s and ’90s, and from the methodological debates in comparative psychology more recently, that sufficiently sophisticated associative architectures can implement rational cognition. Just appealing to “Pavlovian” processes doesn’t delimit the relevant faculties, since even the most sophisticated configural, cue-competition, prediction-error, cognitive-mapping type models can claim to be natural outgrowths of Pavlovian approaches to learning.

      1. Hey Cameron,

        I think you’re right about appeals to “Pavlovian” transitions not being particularly helpful. I was hoping in my piece to sidestep deep issues about whether we should distinguish between rational inference and association and, if so, how. So the appeal to “Pavlovian” transitions is a bit of a placeholder for whatever processes, if any, turn out to be interestingly/relevantly different from rational &/or inferential ones. It seems likely that I should make that more explicit in the paper or do away altogether with the “Pavlovian” label.

        In fact, I think it’s possible that the kind of distinction I should appeal to is less an inferential/associative one than it is a distinction between transitions that result from personal-level information and transitions that result from sub-personal information. On some Bayesian models, perception is deeply inferential and/or rational in a sense, but some or a lot of the inputs that drive perceptual change are still going to be sub-personal, so I wouldn’t take these processes to make perception revisable in the way I claim belief is. I want belief to be revisable in response to personal-level information (I don’t make this explicit in the current version of the paper, but I think I should). So, if there’s the difference between perception and belief that I claim there is, it may be because perception at least sometimes isn’t revised in response to personal-level information (and not necessarily because perception is not rational).

  6. Hey Eric, Yes, I agree that illusions we somehow adjust to are an interesting case. And someone who likes what I say about belief and wants to use it to figure out what’s perceptual and what’s belief may need to attend to whether adjusted illusions are associatively or rationally produced.

    As it is, I myself have a very different take on adjusted illusions. I think the perceptual experience remains the same, but what you glean from it changes. Maybe to the experienced driver, cars in the rear view mirror really look to be about as far away as they in fact are, not further away. But I’m not convinced of this.

    Think about what would happen if one small region of an experienced driver’s mirror were to suddenly stop having the usual distancing effect and to start making cars look as far away as they in fact are. So for this driver, some cars will be reflected by the distancing-effect bit of the mirror and some cars will be reflected by the no-distancing-effect bit. I think that this will make the latter cars look much closer than the other cars. This suggests to me that the experienced driver’s visual experience doesn’t really represent the cars as at-the-right-distance. It’s only her inferences about her visual experience that adjust to the evidence.

    But of course, this is a complex and old, old problem, so much more to say (!)

    1. Thanks — yes it’s a mess of an issue, and one we’re not going to straighten out here! One possibility in your revised mirror case is that a car’s occupying more visual arc makes it look like it’s in a different angular position (say farther to the left, where the cars generally are when their reflections hit that part of the mirror) rather than a different distance.

      This isn’t to suggest that the driver couldn’t stop and notice that the car occupies more visual arc, or that the car seems to occupy less visual arc than it in fact does. Rather, I’d suggest that what the driver learns is that there’s more than one way of looking to be 60 feet distant — a flat-mirror way and a curved-mirror way, both equally veridical.

      (BTW, I have a paper that discusses this case in some detail: “The Problem of Known Illusion and the Resemblance of Experience to Reality” (Philosophy of Science 2014). But don’t feel obliged to go read it before replying, if you choose to reply. I suspect that what I’ve said above is already enough for you to get an adequate sense of my gist!)

      1. By the way, aspheric driver’s side mirrors are being developed that are flat in one part and curved toward the outside to reduce the driver’s side blindspot, and the preliminary evidence is that people seem to adjust to them pretty well. So your case is actual!

      2. Hmm, that’s very interesting! I’ll have to think about it more, but yes, it’s clearly even more complex than I’d thought. One thing I’d wonder about—and maybe you know of some results that relate to this—is whether, e.g., with the aspheric mirrors, there is evidence that subjects have suppressed behavioral tendencies that conflict with their displayed, adjusted tendencies? E.g., do these subjects show signs of reduced processing capacity, suggesting effortful suppression? Or is their adjustment total, so to speak?

        Depending on how this debate turns out, it may be that my case for distinguishing perception and belief will turn out after all to hinge on something like an associative/rational process distinction (or, with a nod to Cameron’s worry that this distinction might not turn out to track much, maybe a distinction between rational transitions mediated by personal-level information vs. rational transitions mediated by subpersonal-level info).

      3. I haven’t checked in the last couple years on aspheric mirrors — my guess is that the research isn’t quite that fine-grained.

        It would be very interesting if the belief/perception distinction turned on the rational/associative distinction. I think both distinctions are rather a mess, so that would make it one giant globular mess instead of two separate messes!

  7. Hi Grace,

    Thanks for the paper!

    I wanted to ask you about how to apply the revisability test. In your paper, you consider only faith-based or neurologically induced cases of unrevisability. But there seem to be other cases of beliefs that are shielded from revision, which are more benign.

    What I have in mind is the Duhem/Quine phenomenon. Suppose I hold (i) that it is essential to something being a swan that it be white, and (ii) that there are no swans native to Australia. Then, no matter how many pictures of Australian swans you show me, I will not be convinced to give up (ii), since the birds you show me will be black — and so do not count as swans by my lights. (This is a trivial example, but it shouldn’t be too hard to find more serious ones, e.g. from the history of science.)
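    The shielding structure can be displayed explicitly (my formalization of the example, not Markos’s):

```latex
% (i) shields (ii): any black bird b fails (i)'s consequent,
% so by (i) it is no swan, and hence no counterexample to (ii).
\begin{align*}
\text{(i)}\quad  & \forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)\\
\text{(ii)}\quad & \neg\exists x\,\bigl(\mathrm{Swan}(x) \wedge \mathrm{NativeToAustralia}(x)\bigr)\\
\text{Datum:}\quad & \mathrm{Black}(b) \wedge \mathrm{NativeToAustralia}(b)\\
\text{By (i):}\quad & \neg\mathrm{Swan}(b),\ \text{so the datum never contradicts (ii).}
\end{align*}
```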

    So, so long as my doxastic state in the worlds we consider is allowed to vary only with respect to what evidence against (ii) I get, the result will seem to be that (ii) is not a belief. So it looks like we will need to consider worlds in which I don’t hold (i) as well.

    Does this seem right? Are we allowed to vary the subject’s other beliefs at will, for the purposes of the test?

    1. Hey Markos, that’s a really interesting question! In the case you describe, it does seem that we should treat your representation “there are no native swans in Australia” as revisable. But the question is how & whether my theory can get us to that result in a non-ad hoc way.

      In the paper, I characterize revisability as the capacity to be revised in response to available counter-evidence, and it’s the availability condition that I think can help us get the right result in your ‘swan’ case. I want to say the evidence from the photographs isn’t available to you—due to your false belief that all swans are white—so the fact that you don’t revise your view in response to the evidence doesn’t show that that view isn’t revisable. It shows us nothing at all. To check whether that view is revisable, we’d have to look at worlds where the evidence is available to you, and those worlds will be ones where you don’t have the false belief that all swans are white.

      So, yes, I’d want to vary the false belief to check revisability, as you suggest, and I also think/hope that approach follows pretty naturally from the availability condition on relevant counter-evidence. I further think/hope that the availability condition follows pretty naturally from some of the considerations I draw on to argue for the revisability view, viz. considerations about when beliefs ought to be revised.

      What do you think?

      1. Hi Grace,

        Thanks for the reply, that does make sense to me. I guess the confirmation bias cases you consider might also work a bit like this, since often the best way to get people to acknowledge the evidence against their views is to re-frame the issues in less charged ways.

        One result might be that the criterion becomes even more flexible. For example, your example of some cases of religious faith would seem to clearly fall on the side of belief now: I suspect even for the most committed theist there is some combination of beliefs such that, had they held those beliefs, they would be open to changing their minds. (But perhaps that is the right result anyway.)

  8. Hi Grace, thanks for this really interesting paper. I find it very persuasive! — though I’ve not yet read Mike’s and Neil’s commentaries, so as your view predicts, this assessment could change. Here are two very quick thoughts:

    • First, I wonder whether you’re open to the idea that belief might come in degrees, not just of confidence, but of capability of responding rationally to evidence. (I hope it’s clear enough what I’m after, but I could supply an example if needed.) Maybe there’s something in the metaphysics of capabilities that rules this out, but if it’s possible for X’s capability to F to come in degrees, then the same might be true of beliefs, in which case a person would believe something more or less depending on how able she is to revise this belief rationally. Do you think that’s a coherent possibility? Do you think it’s compatible with our folk-psychological concept of believing?

    • Second, and really quickly, in connection with your brief discussion of God’s beliefs: there’s an old paper by William Alston called “Does God Have Beliefs?” where he argues on grounds similar to these that God’s knowledge doesn’t involve His having any beliefs at all. I don’t think what he says there is directly relevant to your main argument, but it might be worth checking out.

  9. Hey John, thanks so much for these questions!

    So, as for the first one, yes, I do think belief might come in degrees. In the paper, I restrict myself to the claim that for some state to be (determinately) a belief, it must be (determinately) revisable. But I take that result to be consistent with the possibility that other states might be indeterminate w.r.t. belief status (or come in degrees, if that’s different), and perhaps even because they’re indeterminate w.r.t. revisability. (In his comments, Michael gives some nice examples of cases where it’s plausibly indeterminate as to whether some state is revisable.)

    As to whether indeterminate or graded beliefs are folk psychology friendly, I don’t know. But as is probably clear from the kinds of things I wind up saying in my paper, I’m comfortable doing quite a bit of violence to folk psychology (!) E.g., I think a lot of people would count their faith-based religious views as beliefs, but I think they’ve got that wrong–or at least, that in the sense of ‘belief’ relevant to epistemology, they’ve got that wrong.

As to the second question, I would take the revisability view to be consistent with the possibility that an omniscient being has beliefs. This is because I count states that are never contravened by evidence as trivially rationally revisable, in virtue of never failing to be revised in response to counter-evidence. So God’s views might still count as beliefs, even if God never has counter-evidence against those views. It’s interesting to know that Alston comes to a different conclusion. I definitely will be checking out his argument.

  10. Hi Grace, I have a question about your discussion of religious acceptances. Do you think, as I suppose Neil does, that the hypothetical characterization you offer in sections 3.2 and 3.3 is plausibly an accurate construal of what many people’s religious attitudes are like? Because while the case you describe seems psychologically possible to me, I find it likely that most people’s religious (and firmly-held non-religious) attitudes are sustained instead by mechanisms like confirmation bias, selective attention, a skill at finding (real or merely apparent) holes in objections and counterarguments, a tendency to avoid evidentially threatening situations, and so on. If that’s right, then even if in practice there’s not going to be anything that would unsettle them (and even this assumption seems a bit too much), would these attitudes count as beliefs on your view? (And Neil, would they count as “factual beliefs” on yours?) Obviously there are two questions here, one analytic/conceptual and the other empirical/descriptive, and my immediate interest is more in the former than the latter, which would take a lot of empirical data to address.

    Another way to put the question would be to note that, at least to the extent that I understand it, on certain sorts of Bayesian views a person’s priors might be such that just about (or maybe even: absolutely?) any evidence against a given proposition P will be outweighed by the prior-supported evidence for it. (Al Plantinga often appeals to a view like this in explaining how religious convictions can be subjectively rational.) Given that priors aren’t the sorts of things that evidence can undermine or support, I suppose you wouldn’t count them as beliefs, which seems plausible enough to me. But what about these thoroughly-prior-supported attitudes that will be (at least subjectively) supported by the evidence come what may? Would these qualify as beliefs on your conception?

    1. Hi John,

      Thanks so much for these questions.

As for whether I think that there are in fact unrevisable religious views–I’m not at all sure. In my early thinking on this topic, I’d supposed there were no states like this. And in the paper, I take myself to be addressing the question “If there are unrevisable religious views, can your view accommodate them in a satisfying way?”

      The question of whether there are in fact unrevisable religious views is a point where Neil and I may turn out to differ. Neil posits that there are such states. He calls them “religious credences,” and these are meant to differ from “factual beliefs”–like the belief “Maine is a state in the U.S.”–in a number of respects, one of which is to do with evidence-responsiveness. I am not yet convinced that (any) religious views are genuinely not evidence-responsive. Having finally read Neil’s (extremely wide-ranging and original) paper where he defends this view, I think I may be inclined to explain some of the data he considers differently than he does, and in a way that preserves the evidence-responsiveness of religious views. But there is a lot to say on this (including that Neil may be using “evidence-responsive” in a different way than me), and I am open to Neil’s convincing me, esp. as I take it that the evidence he discusses in the paper is meant to support a preliminary and not decisive case for his view. (Neil, please chime in if I am misrepresenting your thoughts on this!)

As for your other question about whether my view counts “thoroughly-prior-supported attitudes” as beliefs, I think I would like to exclude these states from the class of belief, because it sounds like they really can’t be revised despite counter-evidence. The details matter. If what’s really going on with these states is that they are never contravened by evidence, they can count as revisable in a trivial way, in virtue of never failing to revise in response to counter-evidence.

1. Thanks, Grace. But let me push you on the last point. Your condition of evidence-responsiveness has to do with the possibility of rational revision, and for a Bayesian of the sort I have in mind the rationality of revision is always relative to one’s prior assumptions. This means that as a matter of fact (though not of principle) there might be belief-like attitudes so supported by one’s priors that no revision of them would be rational. These attitudes would, I suppose, be sort of like you imagine God’s beliefs to be, with the important proviso that they might be wrong or ill-founded. Would you think that they, too, were beliefs?

      2. Hey John,

If there are states that are such that no revision of them is ever rationally required, due to their being supported by high priors, then yes, you’re exactly right that I would treat them in the way I treat God’s beliefs–as trivially rationally revisable in virtue of never being contravened by sufficiently strong evidence. However, I very much doubt that there are even possible states like this — so long as the high prior is below a credence of 1, there must be, at least in principle, evidence that would be sufficiently strong to require revision of those states, right? For mental states like this, checking their revisability requires seeing whether they would be revised in those worlds where they are contravened by such evidence. So these states won’t count as trivially revisable.

Maybe beliefs in a priori, necessary claims, like “2+2=4”, are such that they can’t even in principle be contravened by evidence strong enough to warrant their revision. I’m not sure of this. But if there are such states, then I’d be happy to treat them as trivially rationally revisable, in virtue of not admitting of sufficiently strong counter-evidence.

        Does this speak to your worry? Are there not-revisable-in-virtue-of-high-priors states I’ve missed?
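(A quick illustrative sketch of the point about sub-1 priors, with purely made-up numbers, in case it helps: under Bayesian conditionalization, any prior below 1 can be dragged down by sufficiently strong counter-evidence, while a prior of exactly 1 is immune to revision no matter what the evidence is.)

```python
# Illustrative only: Bayes' theorem shows why a sub-1 prior is always
# revisable in principle, while a credence of exactly 1 never is.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior P(H|E) by conditionalization on evidence E."""
    p_e = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / p_e

# A very high but sub-1 prior yields to strong enough counter-evidence:
# here E is 1000x more likely if H is false than if H is true.
print(round(update(0.999, 0.001, 1.0), 3))  # roughly 0.5: revision required

# A prior of exactly 1 is untouched by the very same evidence.
print(update(1.0, 0.001, 1.0))  # stays at 1.0
```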

      3. Hi Grace, yes, this seems right to me: it’s only if the subject has prior credences of 1 that P and P->Q that her belief in Q will be unrevisable in this way — and that seems really unlikely, though once again even very high but sub-1 priors of this sort will allow the beliefs to prevail against a wide range of evidential challenges.

I was, though, going to suggest just the opposite of what you say in this comment about necessary truths: namely, that even if in fact there can never be any good evidence against these, still a subject might seem to have such evidence (e.g., suppose I read Yalcin’s (2012) alleged counterexample to modus tollens, and it strikes me as compelling reason to reject the principle; or — less reasonably — take Russell’s paradox to be a good reason to reject the truths of mathematics; etc.), in which case, if her acceptance of these truths is a matter of belief, she’ll revise her confidence downward. But I take it you’d be open to that story as well, right?

  11. Hi Grace, thanks for this thought-provoking essay.
    (1) On p. 28 you talk about “emotionally underpinned beliefs” as a special class, and talk about beliefs that in principle can be “divorced from feeling”. The accuracy motive is not different in kind from the ideological motive, at least not in the sense that there is nothing we can call ‘affect’ in the former case. What is the source of the satisfaction we get from solving a math problem or crossword puzzle? To engage in some evol psych speculation, knowing we can be accurate is comforting because of the obvious adaptive advantages of accuracy in many contexts. Comfort also arises, though, from feeling part of a group, feeling the natural world is guided by a loving hand, etc. Either way, incoming dissonant info that is unexpected (thus undermining comfort we feel in having a good grasp of the world around us) or conflicts with other motivated states (my child is no thief, god is good) causes anxiety and calls for resolution.
(2) Who says faith-based beliefs are not revisable? They are very difficult indeed to revise with argument, but much less difficult to revise with emotional experience. People ‘lose their faith’ all the time in response to extreme events like the death of their child. I agree with Schwenkler that religious beliefs are sustained by confirmation bias, etc., though not impenetrably so, given the right push. For evidence on revisions to ideological belief, see Redlawsk, D. P., Civettini, A. J. W., & Emmerson, K. M. (2010), “The Affective Tipping Point: Do Motivated Reasoners Ever ‘Get It’?” Political Psychology, 31: 563–593; Wright, S. C. (2009), “Cross-group contact effects,” in S. Otten, T. Kessler & K. Sassenberg (Eds.), Intergroup Relations: The Role of Emotion and Motivation (pp. 262–283); and Sinatra, G. M., Kienhues, D., & Hofer, B. (2014), “Addressing challenges to public understanding of science: Epistemic cognition, motivated reasoning, and conceptual change,” Educational Psychologist. NVL is not wrong that religious belief seems to involve some elements of imaginative play, but I think it is still on a continuum with other ‘beliefs’. The psychology is a mish-mash (an actual term used by Kant!). I also agree with NVL that the armchair conceptual project of definitively deciding what is a belief or not takes the folk category ‘belief’ too seriously.

  12. Hey Adrian,

    Thanks so much for these thoughts (and also for the helpful references)!

    As to the point/question about emotionally underpinned beliefs, I did not mean to suggest that other beliefs (and perhaps all beliefs?) aren’t emotionally underpinned. I may need to clarify that point in the paper. In fact, I would presume that most or all beliefs are importantly connected to affect. In discussing cognitive dissonance as a mechanism of belief change, I certainly did not mean to suggest that this mechanism is specific to beliefs like “my daughter is a good person.” In fact, I suspect that mechanism to be ecumenical. I merely meant to be considering beliefs that are especially tightly connected to powerful emotion, for the purposes of clarifying the implications of my view.

As to the second question, “Who says faith-based beliefs are not revisable?”: other than Neil, who I take it does say this, I’m not sure who endorses this claim (Neil may be able to help out here). In presenting this paper to other audiences, many people have suggested this view, though I’m not sure how many accepted that there are such states vs. were merely curious what my view would say about such states.

    I’m not sure whether you meant to be asking if I myself think religious views are revisable, but in case you are, I’ll reiterate what I happen to have just now posted in response to John’s question: I have myself tended to presume that faith-based views are revisable. In the paper, I don’t mean to be taking a stand on the issue but to be responding to the question, “If there are religious views that aren’t revisable, can your view accommodate them in an adequate way?” If there are unrevisable faith-based acceptances, my view excludes them from the class of belief, and I have argued that this would be the right result.

Finally, in connection with this: “I also agree with [Neil Van Leeuwen] that the armchair conceptual project of definitively deciding what is a belief or not takes the folk category ‘belief’ too seriously.” I wasn’t sure if this was meant to connect to my own views or was just an aside, but to clarify: I do not take my view to capture a folk psychological notion of belief. Far from it. I do take it to capture what epistemologists mean by ‘belief’–or what they should mean–if they are to maintain two very attractive views, viz. the norm of revision and the particular kind of ‘ought’ implies ‘can’ I defend.

  13. Hi Grace, John, Adrian, and other friends,

    I just want to clarify what I think is interesting about the revisability or lack thereof of religious credences (my term of art for what most people seem to be after with the phrase “religious belief”).

When I say that religious credences are not evidentially vulnerable, I have a very specific meaning in mind for “evidentially vulnerable.” One of the requirements on a state’s being evidentially vulnerable in my sense is that contrary evidence has a tendency to extinguish it by non-voluntary processes. I think most factual beliefs are evidentially vulnerable in this way. If I thought your car was red and then see a white car in your garage and learn that that’s your only car, the contrary evidence extinguishes my belief whether I like it or not. (See p. 14 for the exact definition: https://www.academia.edu/8126466/Religious_Credence_is_not_Factual_Belief )

    So when I say that religious credence is not evidentially vulnerable, it doesn’t actually entail that no one ever gives up religious credences due to confrontation with evidence. I think this does happen, although it’s not so common. But when it does happen, it is by different processes from those that control evidence-based revision of what I call factual beliefs. Giving up religious credences is much more of a choice, even if it is a choice based on evidence.*

    So there is a substantial difference in the psychological processes by which religious credences, as opposed to factual beliefs, relate to evidence.

    To relate this to Grace’s view, it appears on reflection that my requirement that factual beliefs be evidentially vulnerable is somewhat stricter than Grace’s requirement that beliefs be revisable. She is making revisability in response to evidence a necessary condition on a state’s being a belief. I go one step further and require that the revisability be in virtue of non-voluntary processes.

    Grace, does that characterization of the relation between our views sound right?

    Best,
    Neil

Note: since religious credence and factual belief are attitude concepts, and since attitude and content are independent dimensions of a mental state, it is perfectly possible for there to be factual beliefs with religious contents (I think this happens, though it’s probably empirically fairly rare). So if you find yourself thinking of a “religious belief” that was given up *involuntarily* in response to evidence, then it may be that this was just a factual belief with religious content. (My thesis about religious credence, just to be clear, is an empirically-motivated existential claim; it doesn’t rule out the possibility of other things that may also go under the name “religious belief.”)

    1. Hey Neil,

      Thanks for this. And yes, since I characterize revisability in a way that is neutral as to voluntariness, this sounds exactly right: “To relate this to Grace’s view, it appears on reflection that my requirement that factual beliefs be evidentially vulnerable is somewhat stricter than Grace’s requirement that beliefs be revisable. She is making revisability in response to evidence a necessary condition on a state’s being a belief. I go one step further and require that the revisability be in virtue of non-voluntary processes.”
