Against Intellectualist Theories of Belief

Jack Marley-Payne, MIT


[Jump to Keith Frankish’s commentary]
[Jump to Eric Schwitzgebel’s commentary]

Abstract

Belief has long been held to be connected with both speech and action. However, cases of conflicting behaviour show that only one of these connections can be constitutive. Intellectualism is the view that the connection between belief and speech (and also conscious judgement) is to be prioritized. And, therefore, subjects with conflicting behaviour believe what they say. A prima facie compelling motivation for the view is the claim that beliefs are intelligent states, and only states involved in linguistic and conscious processes are sophisticated enough to meet the appropriate standard of intelligence. In this paper I plan to examine this line of thought and argue that it is mistaken.

1. Introduction

We say what we believe to be true. Moreover, our beliefs guide our actions. As rules of thumb these claims are beyond reproach. Indeed, they seem so plausible that it’s tempting to infer that they form constitutive conditions for belief. However, this can’t be so: the two are incompatible when taken as fully general principles. This is revealed by cases of conflicting behaviour in which a subject is disposed to assert one thing, and indeed endorse it upon reflection, while behaving as though she believes the opposite – for example, an elite sports player who makes incorrect statements about how one should play. Therefore, there must either be some beliefs that are not manifested in verbal behaviour or there must be some action guiding states that are not beliefs. We must choose whether our conception of belief is primarily connected with language and deliberation, or with action.

Intellectualism is the view that the connection between belief and speech (and also conscious judgement) is to be prioritized. And, therefore, subjects with conflicting behaviour believe what they say. A prima facie compelling motivation for the view is the claim that beliefs are intelligent states, and only states involved in linguistic and conscious processes are sophisticated enough to meet the appropriate standard of intelligence. In this paper I plan to examine this line of thought and argue that it is mistaken. In short, the reason for this is that verbal states are less sophisticated than might initially be supposed and non-verbal states are more so: both meet criteria for sophistication imperfectly. Therefore, both look like equally good candidates to qualify as beliefs.

My argument will proceed as follows. First I will describe some cases of conflicting behaviour and the account of them offered by intellectualism (§2+3). Then I will look at two ways to make the argument for intellectualism precise, defended by Gendler (2008a) and Stich (1978) – who focus on evidence sensitivity and inferential integration respectively as criteria for sophistication (§4+5). My response will be to look at a number of examples that illustrate the imperfect ways in which our mental states meet these conditions. I will conclude by suggesting that we should move away from the intellectualist view entirely and adopt an action-based account of belief.

2. Cases of Conflicting Behaviour

The examples of concern are ones in which a subject’s verbal behaviour and conscious judgment indicate belief in a proposition, while her non-verbal, non-conscious behaviour indicates a belief in that proposition’s negation. Note that conscious judgment is naturally grouped together with linguistic behaviour as the internal analogue of assertion – a subject will consciously affirm a proposition to herself when she’s unwilling or unable to assert it aloud. I’ll call verbal behaviour and conscious judgment intellectual behaviour, and other kinds non-intellectual behaviour. Now, consider the following cases:

Skywalk: Over the Grand Canyon there is a giant glass bridge that members of the public may walk out onto. The average subject will be well informed about the strength of glass of this thickness; moreover, she will sincerely assert the sentence ‘I am in no danger, the bridge will support my weight’. And yet despite this, while on the bridge she trembles and sweats and is eager to get across it as soon as possible.1

Implicit Bias: Psychological studies reveal that many white Americans who profess to be committed to racial equality discriminate against black people in their unconscious behaviour. They sincerely assert that black people deserve equal treatment, are equally trustworthy etc. And yet they are less willing to make eye contact, stand further away and display other ‘micro-aggressions’ during social interactions; they are also less likely to hire black candidates relative to the quality of their CV and discriminate in numerous other ways.2

Roadworks: Ben is told that due to road works, the bridge he normally takes on his way to work will be closed. Upon hearing this, he thinks to himself (consciously) that he will have to take the roundabout route. He is disposed to sincerely assert this to anyone who asks him how he’ll be travelling to work in the next week. However, when he drives to work, he’s disposed to set off on the old route, not leave extra time etc.3

The subject on the skywalk has intellectual behaviour suggesting she believes that she is safe, while her non-intellectual behaviour (her trembling and sweating) suggests a belief that she’s in danger. The implicitly biased subject’s intellectual behaviour indicates belief that black people are equally trustworthy and deserve equal treatment to white people, while the unconscious behaviour indicates a belief that black people are worthy of suspicion. Ben’s intellectual behaviour indicates a belief that the bridge is closed while his non-intellectual behaviour indicates a belief that it is open. The question, in light of this, is: what do the subjects actually believe?

To put things neutrally, the subjects have two ‘belief-like states’ – the one guiding intellectual behaviour and the other guiding non-intellectual behaviour. I’ll refer to these as intellectual states and non-intellectual states respectively. So our question can be rephrased as asking: which of these belief-like states are actually beliefs? Both the intellectual and non-intellectual states have an action guiding role appropriate for belief, while (by construction) only the intellectual states have contents the subjects can articulate.4 Therefore, deciding what is to be said about cases of conflicting behaviour requires taking a stand on whether belief is primarily tied to action, or to speech.

3. Introducing Intellectualism

Intellectualism offers a way of understanding these cases. It is the thesis that the subjects believe only what they say (and consciously judge) in these examples – or more generally the claim that only intellectual states are beliefs.5 So according to the view, the subject on the skywalk believes that she’s safe, the implicitly biased subject believes that all races are equal, Ben believes that the bridge is closed – and none of the subjects believe the proposition indicated by their non-intellectual behaviour. Though there are many dissenters,6 intellectualism has been influentially advocated by a number of philosophers. Here are some notable endorsements of the view:

[I]f a subject is psychologically (and physiologically) normal, inclined to be cooperative and has no motivation to deceive us, then if she believes that p and is asked whether p is the case, she will generally say that it is… [In some cases] the subject may be temporarily paralyzed and thus unable to assent to anything. Or he may have a strong desire to mislead his questioner, or simply wish to say nothing. Still, under these circumstances, if we ask a subject whether p is the case, he will generally have a certain sort of characteristic experience…One might also describe the experience as being aware that p or being conscious that p. [Stich (1978) pp 40-41]

[In cases of conflicting behaviour] our beliefs and desires mandate pursuing behaviour B and abstaining from behaviour A, but we nonetheless find ourselves acting – or feeling a propensity to act – in A-like ways… It seems misleading to describe these A-like behaviours as fully intentional [i.e. as manifestations of belief]: their pursuit runs contrary to what our reflective commitments mandate. [Gendler (2012) p 799.]7

[B]eliefs and other cognitive states are analyzed in terms of their dispositions to cause phenomenally conscious episodes of judgment, rather than their dispositions to cause physical behavior. [Smithies (2012) p 348]

[T]he most straightforward expression of a belief is an assertion…beliefs, recognitions and so on, are going to be ascribed to animals in an impoverished and … a somewhat conventionalised sense. [Williams (1973) pp 139-140]8

Since intellectualism requires denying a plausible-seeming principle about the connection between belief and action, the view requires defense. One might simply rely on intuitions about cases, but this would not be a particularly promising strategy, since such intuitions would be highly contestable. It would be much more interesting if a principled argument could be made for the position. As was mentioned in the introduction, a compelling motivation for the view is the idea that only intellectual states are sufficiently intelligent to qualify as beliefs. Clearly some behaviour guiding states, such as our reflexes, are too crude to be beliefs – so it’s highly plausible to think belief entails a certain degree of sophistication. If one can show that only intellectual states meet this condition, one will have a compelling argument for intellectualism.

Sophistication is a loose notion, so one needs to work with a more precise criterion for belief if one is to assess this line of argument. Two promising strategies here have been to appeal to evidence sensitivity and inferential integration as necessary conditions for belief.9 These seem like reasonable glosses on sophistication, since responsiveness to the environment and to previously obtained information distinguishes an intelligent process from an automatic one. It also seems prima facie plausible that intellectual behaviour has a privileged connection with these features. For example, it is through reading and writing that scientific opinions are expressed and communicated. This linguistic behaviour is sensitive to incredibly complex experimental evidence, as well as the detailed testimony of other researchers; and it is in writing that researchers tend to work through long chains of inference when, for example, constructing mathematical proofs.

Despite their plausibility, I think both versions of this argument fail, as I’ll now argue. Both intellectual and non-intellectual states are imperfectly evidence sensitive and inferentially integrated. Thus, considerations of sophistication do not count in favour of intellectualism.

4. Evidence Sensitivity

The argument centred on evidence sensitivity is given by Gendler, who writes:

[W]hatever belief is—it is normatively governed by the following constraint: belief aims to ‘track truth’ in the sense that belief is subject to immediate revision in the face of changes in our all-things-considered evidence… In each of the cases we have been considering, only one of the competing tendencies is evidence-sensitive in this way. The man [on the skywalk] believes that he is safe because if he were to gain evidence to the contrary, his attitude would change accordingly.10

To state the argument explicitly:

1) All beliefs must be appropriately evidence sensitive

2) Only intellectual states are appropriately evidence sensitive

C) All beliefs are intellectual states

For this argument to be successful, the intellectualist must specify what level of evidence sensitivity is ‘appropriate’ for belief. A promising starting point is the fact that in all the cases of conflicting behaviour we’ve discussed, only the intellectual state is sensitive to linguistic evidence. For example, the disposition of the subject on the skywalk to assert that she’s safe is sensitive to whether the staff tell her that she is, while her trembling and sweating are unaffected by such testimony. One might argue, therefore, that beliefs must be sensitive to all the evidence, and since the non-intellectual states are not sensitive to linguistic evidence they cannot be a manifestation of belief.

Sensitivity to all evidence is a very stringent condition for belief so one might be tempted to weaken it. One option would be to go with the minimal constraint that beliefs must be sensitive to some evidence – and argue that non-intellectual states do not even satisfy this. Another option would be to adopt an intermediate condition: that beliefs must be sensitive to evidence in a sufficiently uniform and consistent manner; it might be thought that our ability to grasp linguistic evidence in a systematic fashion indicates this. Or, following Gendler, one could emphasise immediacy – she claims that belief is ‘subject to immediate revision’: that it can ‘turn on a dime’.11 This is meant to contrast with the slow habitual changes in non-conscious behaviour. Here, then, are four alternative theses that could be used to flesh out the argument for intellectualism:

      E1. Beliefs must be sensitive to all evidence.
      E2. Beliefs must be sensitive to evidence in a unified and systematic manner.
      E3. Beliefs must be immediately revisable in response to evidence.
      E4. Beliefs must be sensitive to some evidence.

The intellectualist must then argue that the intellectual states meet the condition of choice while the non-intellectual states do not, thus securing premise two of the argument. None of these theses can be upheld, however: even intellectual states fail to meet the criteria of E1/E2/E3, while many non-intellectual states satisfy E4.12 I will first concentrate on the relationship between linguistic behaviour and evidence and then move on to other types of intellectual behaviour.

4.1 Linguistic Behaviour

Consider the following examples:

Bob the baseball player: Bob is waiting to catch the baseball. As the ball is struck he is unable to say where it is going to land; however, if a reliable source were to tell him the coordinates of where it was heading, he would then be disposed to relay this information when asked. On the other hand, the visual evidence of seeing the ball’s trajectory after it’s hit does not help; he is still unable to state the location – though he is able to move himself to the right place in order to catch it.13

This illustrates that often our intellectual states are sensitive to linguistic evidence but not to perceptual evidence. When it comes to the issue of where the ball is going to land, only Bob’s non-intellectual states are sensitive to the relevant visual evidence.

Sam the teenager: Sam makes assertions about what work he must do to get a good job. He says things like “I don’t need to work much at all to get a job: my friends’ older brothers all did far less work than me and they’re all lawyers and bankers now.” However, the intellectual states underlying these assertions are not sensitive to all linguistic evidence. If his friends were to tell him that the job market has changed and he needs to get some internships if he wants to succeed, he would revise what he said. However, if his family were to tell him the very same thing he would not change at all.14

I take it that this type of example is not unfamiliar. It illustrates that sensitivity to linguistic evidence is not an all or nothing matter even amongst intellectual states.15

Rich the carnivore: Rich asserts that there’s nothing objectionable about eating meat, and indeed he eats lots of it. No amount of testimony as to how cruel the meat industry is will make him change what he says. However, if he were to visit a slaughterhouse and see the cruelty (the very same processes that he has already been told about) what he’d be disposed to assert would change.

This shows that what we are willing to say and endorse may not be sensitive to testimony but only to perceptual evidence. Again, I take it to be a common phenomenon that seeing something will change our behaviour whereas being told about it will not.

Thomas the slow learner: Thomas is taking a set theory course and struggling a little bit. He’s been told some strange things that he didn’t believe before, such as that there are ‘different sizes of infinity’; and he’s been shown watertight proofs that demonstrate the claims. When he hears these things the first time round he’s just confused and doesn’t believe what he’s told. However, he is studious and reads through the proofs multiple times until eventually he ‘gets it’ and believes the theorems.

This shows that linguistic behaviour is not always immediately revisable in the face of evidence which mandates a change. Sometimes (especially in cases where the fact being indicated is especially radical, surprising or strange) it can take a while before what we’ve seen or been told sinks in and we finally change what we are disposed to assert. This phenomenon has been identified and studied in social psychology, and is known as belief perseverance.16

These examples together show that intellectual states fail to satisfy the criteria set by E1-E3 – at least when we restrict our attention to their manifestations in linguistic behaviour. Sam the teenager, Rich the carnivore and Bob the baseball player all illustrate how intellectual states are not sensitive to all evidence (E1). Moreover, Rich shows how intellectual states may not be sensitive to linguistic evidence, and Sam shows that our interactions with linguistic evidence may be messy (E2). Finally, Thomas shows how linguistic dispositions need not be immediately sensitive to evidence (E3).

4.2 Other Intellectual Behaviour

A natural response to these examples is to point out that intellectual behaviour does not just mean linguistic behaviour. Recall that the various defenders of intellectualism differ as to which types of behaviour they take to be central to belief. First, as has already been discussed, there is the distinction between conscious judgments and overt linguistic behaviour. Second, one can distinguish simple assertion from reflective endorsement – that is, an assertion which is made on the basis of a deliberative process and such that one could offer reasons in support of it. Perhaps if we look at a broader range of phenomena, a connection between intellectual behaviour and evidence will emerge.

It might be suggested that only one type of intellectual behaviour is sensitive to evidence in the requisite sense and that it alone is the mark of belief. Alternatively one might claim that the various types of behaviour tend to come as a package. Though any of these taken in isolation might not seem to have a special kind of evidence sensitivity, taken together they do. I think, though, that neither of these strategies can succeed. None of the types of behaviour have a sufficiently strong connection with evidence and they can all come apart from each other. Turning first to reflective endorsement, consider the following example:

Malcolm the snap-judger: Malcolm is a successful businessman who each day has to make dozens of assertions about people’s character, fitness for a particular job, susceptibility to particular forms of persuasion etc. on the basis of quick decisions. If he were to reflect on any particular assertion he had made, he would often not endorse it; his on balance judgment would go against his unreflective assertion. However, at least in the sphere of business decisions, Malcolm’s snap-assertions are more accurate than his reflective judgements.

This shows how our sincere assertions can come apart from what we reflectively endorse; and that, at least some of the time, what we say unreflectively can be more sensitive to the evidence than what we are disposed to endorse after deliberation. There is empirical evidence backing up the claim that cases like this are quite commonplace. For example, Halberstadt and Levine (1999) asked a group of basketball experts to predict the outcome of a basketball game: half were asked to give and analyse reasons before making their prediction and half were not; those who had to give reasons were less accurate on the whole than those who did not do so. Similarly, Wilson et al (1984) asked subjects who were in a relationship to predict how long it would last, with half giving reasons and half not. Again, it turned out that the predictions of the control group were significantly more accurate.

Of course there are also times when judgments arrived at reflectively are better attuned to the evidence than unreflective assertions – for example, when the subject matter of the assertion is some sophisticated area of science. The correct conclusion is that both types of behaviour are imperfectly sensitive to the evidence, and which fares better varies depending on the circumstances.

The next thing to consider is the relationship between assertion and conscious judgment. It might seem that these two are tricky to pull apart since we are almost always aware of what we are saying. Perhaps there are cases of talking on complete autopilot where we are totally oblivious of the words coming out of our mouth, but the separation here is shallow – we could quite easily become conscious of what we were saying by refocusing our attention. However, this only demonstrates a correlation between assertion and linguistic conscious judgment – internal ‘sayings to oneself’. There are other conscious occurrences that have a claim to being a manifestation of belief and that do come apart from what we say. Consider the following case:

Maya the navigator: Maya lives in Brussels – a city with winding, disorderly streets. She is able to navigate it effectively by consulting a mental image she has of the city’s layout. This is a process that occurs reflectively and that she endorses as reliable. However, much of the information that she draws upon, she would be unable to put into words: she could not describe the features that she is aware of when visualising the city.

This illustrates what should be an uncontroversial point: we sometimes have conscious representations – for example mental images – whose content we are unable to fully articulate. These mental images are, moreover, evidence sensitive: Maya’s mental map is formed, and can be updated, as a result of her perceptions of Brussels. Moreover, her verbal behaviour is not as sensitive to this perceptual evidence, since she cannot articulate such a rich description of the city. It should be noted again, though, that in some respects linguistic behaviour will do better at responding to the evidence. For example, if Maya receives a bunch of linguistic evidence about the city’s layout in an Urban Studies course, she will be able to make assertions about the ratio of public space to private space, the percentage of green space etc. And she will not be able to alter her mental image of the city in a way that encodes this information. So again, conscious states and verbal behaviour are both partially sensitive to the evidence, and which fares better depends on the circumstances.17

4.3 Evidence and Non-Intellectual States

We can conclude that intellectual states are sensitive to evidence in an imperfect and complex way. There’s no straightforward method for stating in what ways such behaviour is evidence sensitive; one has to make reference to the type of evidence in question and the circumstances in which the behaviour is elicited. This means that the only plausible evidence-based constraint on belief is the minimal E4.

This level of evidence sensitivity is also exhibited by non-intellectual states, however. Though such states are often not sensitive to linguistic testimony, they often are sensitive to perceptual evidence. Consider, for example, Bob the baseball player: his disposition to move to the appropriate location to catch the ball is sensitive to perceptual evidence regarding the ball’s trajectory through the air. Similar things could be said about much action in sport. More generally, we have dispositions to navigate environments correctly that we are not conscious of. For example, there are many buildings that I have visited just a few times and that I am able to find my way around when I return, but that I am unable to visualise. Moreover, in a familiar environment I will be able to reach for door handles, light switches and elevator buttons without looking, despite being unable to picture their location. These dispositions too are shaped by perceptual evidence.

A crucial point is that even the non-intellectual states of the subjects in the original cases may display this level of evidence sensitivity. Note that the examples as presented do not specify whether the non-intellectual states are evidence sensitive. However, there are plausible ways of filling in the details that make this the case. First take the trembling and sweating of the subject on the skywalk. This behaviour is clearly sensitive to perceptual evidence since it’s seeing through the glass bridge that causes it. Moreover, it might be that if the subject were to vigorously jump up and down, throw herself against the walls etc. and see that everything still held firm, her trembling would cease – sometimes this kind of visceral demonstration is an effective technique for getting over unnecessary fear. Second, Ben’s disposition to drive along the wrong road is sensitive to visual evidence – he’ll turn back when he sees signs for diversions around the bridge; moreover, he’ll lose the disposition (we may stipulate) once he’s made the mistake a few times. Delayed sensitivity to the evidence is enough to satisfy E4: we’ve seen that intellectual states may not be immediately sensitive to evidence either, as with Thomas the slow learner.

The case of implicit bias is more complicated since the non-intellectual behaviour in question is manifested in a vast range of situations and, moreover, there is much uncertainty over how it is formed and how it can be changed. There does seem to be evidence that a subject’s behaviour will become less biased if she is exposed to people who do not fit with the stereotype. For example, Dasgupta and Greenwald (2001) found that exposing subjects to admired black exemplars and disliked white exemplars significantly reduced implicit bias behaviour. And Shook and Fazio (2008) found that white students who shared a dorm room with a black student saw a reduction in implicit bias over the time they spent together. This suggests that even though testimony that black people are not inferior does not influence implicit bias behaviour, more direct forms of evidence may do – such as witnessing up close a black person who does not conform to the stereotype. This is comparable to the partial evidence sensitivity of Rich the carnivore’s linguistic dispositions.18

Together, these considerations show that the fact that beliefs are evidence sensitive does not speak in favour of intellectualism. In fact, an examination of the relationship between various belief-like states and evidence points in the opposite direction since it reveals important similarities between intellectual and non-intellectual states.

5. Inferential Integration

The second argument for intellectualism appeals to the connection between belief and inferential integration. It seems a plausible general principle that beliefs are inferentially integrated – in a sense to be spelled out below – and this might be thought to support intellectualism. The canonical presentation of this idea is given by Stich (1978), who says: ‘a person’s body of beliefs forms an elaborate and interconnected network with a vast number of potential inference patterns leading from every belief to almost any other.’19

He claims, moreover, that the states which have this property are exactly those states that we have ‘conscious access’ to – where the mark of conscious access to a state with the content p is the ability to say that p when asked, or to consciously judge that p when considering the question.20 More recently, this line of argument has been pursued by Neil Levy, specifically to argue that implicit biases are not beliefs. He puts it as follows:

Implicit attitudes are not beliefs. They do not feature often enough and broadly enough in the kinds of normatively respectable inferential transitions that characterize beliefs… they exhibit some of the kind of inference aptness that characterize beliefs. They do so in a patchy and fragmented manner, which indicates they have propositional structure. They are patchy endorsements. [Levy (2014a) p. 18]21

To be explicit, the intellectualist can be seen as making the following argument:

1. All beliefs are inferentially integrated.

2. Only intellectual states are inferentially integrated.

C. Only intellectual states are beliefs.22

There certainly seems to be something right about the idea that beliefs must be inferentially integrated (premise 1): intuitively, beliefs are the kinds of thing that can result from and produce inferences. Moreover, if behaviour results from an inferentially integrated network of states, so that a large body of information is brought to bear on it, this seems like a good reason to think of it as intelligent, rather than as a reflexive response to a given stimulus. And as was mentioned above: beliefs produce intelligent behaviour.23

Stich motivates the claim that only intellectual states are inferentially integrated (premise 2) by example. Suppose that we cognitively encode certain propositions about the syntax of our language – let p be such a proposition (e.g. one providing some constraint on anaphora binding). This would be a paradigm case of a non-intellectual state. Stich notes that a subject might, as a result of her research in linguistics, believe that if p holds then Chomsky is mistaken – she would assert such a thing in conversation – and she might also cognitively encode p, employing it in her processing of language. However, she would not be in a position to conclude that Chomsky is mistaken: if you asked her whether Chomsky was mistaken, she’d say she didn’t know, perhaps adding that she didn’t know whether p held.24 This shows that the subject’s representation of p is not inferentially integrated with her belief that if p then Chomsky is mistaken. On the other hand, if a subject was disposed to assert both ‘p’, and ‘if p then Chomsky is mistaken’ then she would presumably be able to conclude that Chomsky is mistaken. I think that Stich is right about this example but I don’t think all non-intellectual states are like cognitive encoding of linguistic rules.25

Before assessing the argument, though, a clarification is in order about how to think about inference.  Stich’s main example of it is modus ponens. This might suggest that inference is a relation that can hold only between entities with a sentential structure, since the most natural way to think of modus ponens is as a relation between sentences (you need a conditional, after all). However, it’s clear that our ordinary conception of inference doesn’t have this requirement – when it comes to imagistic reasoning for example. Suppose I have a Klee painting that I want to hang in my kitchen; I may visualise both the room’s layout and the painting in order to form a belief as to whether it is too big to fit between the fridge and the oven. This process seems a perfectly good example of an inference – I draw upon information I have about the painting and my kitchen to see whether a particular action is possible. Moreover, it’s an open question whether such processes involve analogue as well as sentential representations – the imagery debate is not yet resolved.26 We are happy to classify processes as inferences without presuming they involve only sentential entities. 

I think, therefore, that we should work with a minimal conception of inference which makes no assumptions about the types of representation it involves – sentential, analogue, or whatever. I do not propose to offer an analysis of the concept, since I think we have a decent intuitive grip on it. In what follows, I will work with this intuitive understanding.

Now returning to the argument, it’s important to distinguish two theses about inferential integration – strong and weak.

Strong integration: A state is a belief only if it is inferentially integrated with all other beliefs.

Weak integration: A state is a belief only if it is inferentially integrated with a sufficient number of other beliefs (but not necessarily all of them).

My strategy will be to argue that the strong integration thesis is false, and that the weak integration thesis, though plausible, does not favour intellectualism since non-intellectual states also satisfy the conditions it sets.27 To put it in the terminology of Levy, intellectual states are also patchy endorsements, so being a patchy endorsement does not preclude a state from being a belief.

Strong integration is too strong, since not even paradigm cases of belief, by the intellectualist’s lights, meet it – some intellectual states cannot be integrated with others. First, it’s clear that the information manifested in linguistic behaviour cannot always be integrated with that represented in conscious mental images. Recall the example of Maya the navigator: she cannot verbally articulate the information represented in her mental map, and she cannot use the theoretical information she asserts to modify the map. In fact, experiments have shown that in certain cases, attempts to articulate the contents of visual memory – e.g., to describe a remembered face – actually degrade it.28

The intellectualist might respond by denying that mental images are manifestations of belief – that all belief must be manifested in linguistic behaviour. As well as leading to an unattractive picture of belief, this move fails to rescue strong integration. For example, sometimes whether we are able to provide a given piece of information verbally depends on what question we are asked. Suppose you are asked ‘is there a four letter English word ending E-N-Y?’ – you might well not be able to answer. However if you are asked ‘how do you spell deny?’ you will correctly answer ‘D-E-N-Y’. So you can access the information ‘the English word deny is spelt D-E-N-Y’. Moreover, you might well be able to give examples of words ending A-N-Y, I-N-Y, and O-N-Y on demand (‘many’, ‘tiny’, ‘pony’). And presumably you could say that ‘puny’ is spelt P-U-N-Y. So you have information that together entails that for every vowel x, there is an English word that ends x-N-Y. However, (prior to reading this paragraph) you were not capable of putting this information together to draw this inference.29

In response to this, the defender of strong integration might appeal to cognitive architecture. My examples show that sometimes subjects are not, intuitively speaking, able to bring together in inference certain of their intellectual states. One could argue that this is not how the claim that ‘all beliefs can be inferentially integrated with all other beliefs’ is to be interpreted. Instead what might be relevant is some sort of in principle accessibility – that there are no barriers in virtue of cognitive architecture, only ‘performance limitations’.

I don’t think this reply is adequate. First, it’s not at all clear that all and only intellectual states are inferentially integrated in this sense. If the mind is ‘massively modular’, as is argued by Carruthers (2006), then intellectual states will themselves be encapsulated from each other. Thus, there would be no beliefs at all according to this version of strong integration.

Moreover, when we switch from an intuitive to a theoretical notion of inferential integration, premise 1 of the argument loses its intuitive plausibility. Though it might be the case that the nature of belief is determined by unexpected findings in empirical psychology, how the two are connected is tied up with wide-ranging and controversial methodological questions. This is a matter I’ll discuss briefly in the final section, but for now I want to put aside strong integration and turn to weak integration. My contention is that many non-intellectual states are weakly integrated. One of the most compelling examples of this is, I think, behaviour in sport. Consider the following case:

Federer: In the 2006 Wimbledon final between Federer and Nadal, the following rally takes place. Federer hits three backhands down the line, causing Nadal to stay planted on that side – lulled into a false sense of security. He then hits two hard shots to the opposite side, taking advantage of Nadal’s flat-footedness and forcing him to scramble. This leads Nadal to play a weak short shot, allowing Federer to return at a very sharp angle so that he wins the point.30

In this example, Federer acts in an intelligent way, executing a complex and difficult plan to win the point. It is not habitual or reflexive behaviour, since it is tailored to Nadal’s specific abilities – against other players Federer could have tried to win the point sooner, but Nadal is exceptionally quick. It’s also open to modification depending on exactly what Nadal throws back, and when and how he gets wrong-footed. This strongly suggests it is the product of a network of inferentially integrated states since it is sensitive to information received from a variety of sources over an extended period of time – background knowledge of Nadal and of Federer’s own abilities, and perceptual information about what the ball is doing, the court conditions etc. It is not, though, the product of a conscious process of deliberation; it happens far too fast for that.31

There is also this level of inferential integration in the cases mentioned in previous sections. Bob the baseball player’s disposition to move to catch the ball may have been calibrated on the basis of a wide variety of information, such as his visual perception of the ball, his sense of the wind strength and direction, and his kinaesthetic sense of the state of his own body (how fast he can run, dive etc.). Ben is not only disposed to drive on the old route to work: he will also set off with his dry cleaning even though he passes the dry cleaners only on the old route; and if he fancies a bagel, he will change the way he is driving to go past the deli, etc. Recall also the example of a building I have visited a few times and can remember my way around as I go – though I can’t articulate or visualize its layout. When I enter it, I will alter my behaviour depending on what I aim to achieve there, and also if I get new perceptual evidence that the layout has changed. With implicit bias, the characteristic behaviour manifests itself in a variety of ways – body language, speech, workplace decisions etc. It also appears to draw on background information in its activation – hiring decisions are unconsciously influenced by the racial connotations of the name on the CV and so would appear to be mediated by background beliefs about which names are typical of which races.32

I conclude that the appeal to inferential integration to establish intellectualism fails.33 In the next section I’ll briefly look at where this leaves us when it comes to understanding belief.

6. The Way Forward

We have seen that intellectual states cannot be singled out as states of particular interest by considerations of sophistication. Indeed, intellectual and non-intellectual states form a unified kind in virtue of both being imperfectly evidence sensitive and inferentially integrated. I think this makes a more liberal account of belief attractive: one on which all sufficiently sophisticated states with the appropriate action guiding role count as beliefs.

It is of course open to the intellectualist to deny such a claim, but I think at this point the burden of proof is on them to give an argument for restricting what qualifies as a belief. There are, moreover, methodological grounds for resisting such a move. By ensuring a tight link between belief and intelligent behaviour, the action-based account picks out a practically significant category. If states are evidence sensitive then we are able to work out when an agent is in them on the basis of her environmental setting – I know you believe that it’s raining because you are sitting in front of a window and can see the rain coming down. Moreover, this feature allows us to influence such states by presenting new evidence. If states are inferentially integrated, they will influence behaviour in a systematic way over a range of situations. Thus, knowing when subjects possess such states allows us to predict, explain and influence their behaviour in a systematic manner. This has been held to be a central feature of belief (and belief attribution) by philosophers as otherwise opposed as Dennett (1971) and Fodor (1987).

Thus an action-based account of belief accords with the central role of belief ascription, while an intellectualist account hampers it. I think this gives us good reason to prefer the action-based account.

References

Anderson, C. A. (2007). Belief perseverance. Encyclopedia of Social Psychology. Thousand Oaks, CA: Sage, 109-110.

Brownstein, M., & Madva, A. (2012). Ethical automaticity. Philosophy of the Social Sciences, 42(1), 68-98.

Carruthers, P. (2006). The architecture of the mind. Oxford University Press.

Dasgupta, N., & Greenwald, A. G. (2001). On the malleability of automatic attitudes. Journal of Personality and Social Psychology, 81(5), 800.

Davidson, D. (1975). Thought and talk. Mind and language, 7-23.

Davies, M. (1989). Tacit knowledge and subdoxastic states. In A. George (Ed.), Reflections on Chomsky. Oxford: Blackwell.

Dennett, D. C. (1971). Intentional systems. The Journal of Philosophy, 68(4), 87-106.

Fodor, J. A. (1983). The modularity of mind: An essay on faculty psychology. MIT press.

Fodor, J. A. (1987). Psychosemantics. The MIT Press.

Frankish, K. (2004). Mind and supermind. Cambridge University Press.

Gendler, T. S. (2008a) Alief and belief. Journal of Philosophy, 105(10), 634.

Gendler, T. S. (2008b). Alief in action (and reaction). Mind & Language, 23(5), 552-585.

Gendler, T. S. (2012). Intuition, imagination, and philosophical methodology: Summary. Analysis, 72(4), 759-764.

Gertler, B. (2011). Self-knowledge and the transparency of belief. In Self-Knowledge. Oxford: Oxford University Press.

Halberstadt, J. B., & Levine, G. M. (1999). Effects of reasons analysis on the accuracy of predicting basketball games. Journal of Applied Social Psychology, 29(3), 517-530.

Kosslyn, S. M. (1996). Image and brain: The resolution of the imagery debate. The MIT Press.

Levy, N. (2014a). Neither Fish nor Fowl: Implicit Attitudes as Patchy Endorsements. Noûs.

Levy, N. (2014b). Consciousness, implicit attitudes and moral responsibility. Noûs, 48(1), 21-40.

Mandelbaum, E. (2012). Against alief. Philosophical Studies, 1-15.

Powers, L. H. (1978). Knowledge by deduction. The Philosophical Review, 87(3), 337-371.

Pylyshyn, Z. W. (1981). The imagery debate: Analogue media versus tacit knowledge. Psychological Review, 88(1), 16-45.

Ryle, G. (1949). The concept of mind. Routledge.

Schooler, J. W., & Engstler-Schooler, T. Y. (1990). Verbal overshadowing of visual memories: Some things are better left unsaid. Cognitive Psychology, 22(1), 36-71.

Schwitzgebel, E. (2010). Acting contrary to our professed beliefs or the gulf between occurrent judgment and dispositional belief. Pacific Philosophical Quarterly, 91(4), 531-553.

Shoemaker, S. (2009). Self-intimation and second order belief. Erkenntnis, 71(1), 35-51.

Shook, N. J., & Fazio, R. H. (2008). Interracial roommate relationships: An experimental field test of the contact hypothesis. Psychological Science, 19(7), 717-723.

Smithies, D. (2012). The Mental Lives of Zombies. Philosophical Perspectives, 26(1), 343-372.

Stalnaker, R. (1984). Inquiry. MIT Press.

Stalnaker, R. (1991). The problem of logical omniscience, I. Synthese, 89(3), 425-440.

Stich, S. P. (1978). Beliefs and subdoxastic states. Philosophy of Science, 45(4), 499-518. Reprinted in his (2011) Collected Papers: Volume 1 (page references to the latter).

Tye, M. (2000). The imagery debate. Mit Press.

Wallace, D. F. (2006). Federer as religious experience. The New York Times, August 20.

Williams, B. (1973). Deciding to believe. In Problems of the Self (pp. 136-151). Cambridge University Press.

Williamson, T. (2007). The philosophy of philosophy. John Wiley & Sons.

Wilson, T. D., Dunn, D. S., Bybee, J. A., Hyman, D. B., & Rotondo, J. A. (1984). Effects of analyzing reasons on attitude–behavior consistency. Journal of Personality and Social Psychology, 47(1), 5.

Zimmerman, A. (2007). The nature of belief. Journal of Consciousness Studies, 14(11), 61-82.


Notes

  1. This example comes from Gendler (2008a).
  2. See, e.g., Gendler (2008a), Schwitzgebel (2010), Gertler (2011) for discussion.
  3. This example comes from Schwitzgebel (2010). Gendler’s (2008a) case of the lost wallet, and Zimmerman’s (2007) case of Hope and the Dustbin are similar in form.
  4. I’m assuming that assertion is itself a type of action.
  5. My arguments apply equally to the broad and narrow version of the thesis so I will not dwell on which is the more plausible.
  6. There are many ways one could make such a denial. For example: Shoemaker (2009) claims that both intellectual and non-intellectual states are beliefs, so that the subjects in the examples above have contradictory beliefs; Schwitzgebel (2010) argues that intellectual and non-intellectual states are both parts of a single complex belief state, so that subjects with conflicting behavior are in a state of ‘in between belief’; Gertler (2009) claims that the non-intellectual states alone are beliefs.
  7. Gendler claims that the non-intellectual states I’m considering, along with many others, form a distinctive mental kind she calls ‘alief’. It should be noted that my arguments do not entail that all cases Gendler classifies as alief are beliefs, since many of them – such as priming effects – lack the appropriate degree of sophistication or action guiding role. Thus I leave room for a less expansive notion of alief.
  8. See also Davidson (1975), Brownstein & Madva (2012) and Zimmerman (2009) for similar ideas.
  9. There have been other attempts to capture the complexity of belief similar in spirit to the ones I’m considering – for example, Davies’ (1989) appeal to the generality constraint. I think the objections I raise will carry over to alternative formulations. See note 33 below for further discussion.
  10. Gendler (2008b) pp. 565-566. In a précis of her work on belief (2012, p 763) she says: ‘beliefs are, roughly speaking, evidentially sensitive commitments to content that are quickly revisable in the face of novel information’ – so this appears to be her considered position. See also Brownstein and Madva (2012) p71 who make a similar argument concerning cases of implicit bias.
  11. Gendler (2008b) 565-566. She repeats this claim in her (2012) – see note 10.
  12. This line of thought is endorsed by Schwitzgebel (2010).
  13. This example comes from Stalnaker (1991).
  14. Variations on this example are possible where what decides whether Sam is sensitive to linguistic evidence is not his social relation to the speaker but whether the speaker talks in his teenage vernacular, is sufficiently rhetorically adept, charismatic etc. This further illustrates the messiness of our relation to linguistic evidence.
  15. One might object that it could be that Sam’s intellectual states are sensitive to the testimony of his parents; it’s just that he has a standing belief that they are unreliable testifiers, which defeats such evidence. However, we can imagine that Sam is disposed to sincerely assert that his parents are reliable sources of information, and yet ignore their advice regardless – and the intellectualist is committed to saying that he believes what he says. I’m sure this is in fact the case with implicitly biased subjects – they say, e.g., that the testimony of black people is equally reliable and yet they completely ignore it.
  16. See Anderson (2007).
  17. As a variant on this case we can consider auditory rather than visual imagination. For example, a jazz musician without formal training might be able to imagine how various motifs would sound, and which would work in a given circumstance – this might guide her playing though she would be unable to verbally articulate what she was imagining. Moreover, the imaginative abilities might be updated as she heard new performances.
  18. Schwitzgebel (2010) pp. 539-541 makes a similar point.
  19. Stich (1978) pp. 42-43.
  20. Ibid pp. 40-41.
  21. See also Levy (2014b).
  22. I think that Stich himself is working with a slightly different dialectic. He takes it to be intuitively obvious that all beliefs are intellectual states (mistakenly, in my view) and wants to argue that inferential integration is a property distinctive of them – and thus that our concept of belief is a category of theoretical interest.
  23. A related argument is Fodor’s (1983) influential claim that beliefs are states within the central system – as opposed to informationally encapsulated modules. I stick with Stich’s formulation since it requires less technical machinery.
  24. See Stich (1978) p. 44.
  25. Levy appeals to empirical data to argue that implicit biases are not inferentially integrated.
  26. See, e.g., Tye (2000).
  27. So in the argument above, if we work with strong integration then premise 1 is false, while if we work with weak integration premise 2 is false – either way the argument is unsound.
  28. See Schooler and Engstler-Schooler (1990).
  29. The original trick regarding ‘deny’ is found in Powers (1978).
  30. This is a paraphrase of the commentary given by David Foster Wallace (2006).
  31. One might argue that Federer possesses ‘mere practical knowledge’. However, the nature of practical knowledge – in particular whether it involves beliefs – is very much an open question. Indeed I take it to turn on the kinds of issues being discussed in this paper. Therefore, it would be begging the question to assume practical knowledge is not belief.
  32. See Mandelbaum (2012) for further argument that implicit bias must involve inferential reasoning.
  33. As I mentioned above, Davies (1989) – among others – has argued that one better captures the sophistication of belief by appeal to the generality constraint, rather than inferential integration. I think if one were to try to run the argument in these terms, my objection would still apply: intellectual states only satisfy the generality constraint partially and non-intellectual states do so too. Spelling out this argument, however, would require setting everything up in terms of the contentious theory of concepts the generality constraint presupposes, so I won’t pursue the matter here.

Commentary by Keith Frankish

  1. For a Dual Theory of Belief

    Sometimes our actions belie our words and judgements. We consciously judge and sincerely assert that something is the case but act as if it were not. We fail to take it to heart or to live up to it, or it may just slip our minds. In such cases, Marley-Payne claims, we have two different belief-like states: an intellectual state, which guides our verbal behaviour and conscious judgements, and a conflicting non-intellectual state, which guides the rest of our behaviour. Intellectualism, as Marley-Payne calls it, is the view that only the former states are beliefs, since only they exhibit the required sophistication, spelled out in terms of evidence sensitivity and inferential integration. Marley-Payne spends the bulk of his paper attacking intellectualism, distinguishing strong and weak criteria for evidence sensitivity and inferential integration, and arguing that intellectual states often fail to meet the strong criterion and that non-intellectual states often succeed in meeting the weak one. Instead of intellectualism, he proposes a unified account of belief, on which all sufficiently sophisticated states with an appropriate action-guiding role count as beliefs. This conception, Marley-Payne argues, picks out a state of practical significance, which we can ascribe on environmental evidence and which enables us to predict, explain, and influence each other’s behaviour.

    My overall response to Marley-Payne’s paper was positive. He makes a strong case against the superior sophistication of intellectual states, highlighting the imperfect and complex ways in which belief-like states are evidentially sensitive and inferentially integrated. (I particularly applaud his discussion of the way in which conscious belief may involve imagistic representations.) Moreover, I agree that an action-based account of belief is both coherent and useful. There are, however, several points I’d like to make, in a spirit of constructive revision and sympathetic extension.

    First, I don’t think it is useful to see cases of conflicting behaviour as involving a conflict between an intellectual state and a non-intellectual state, defined in terms of the type of behaviour they guide. Consider, for example, Schwitzgebel’s Juliet, the implicit racist (Schwitzgebel, 2010, p.532). Juliet is a liberal academic who sincerely asserts that there are no racial differences in intelligence. She has studied the scientific literature on the topic, is convinced by the case for equality, and argues strongly for it. Yet her unreflective behaviour and judgements of individuals display systematic racial bias. She cannot help thinking that black students look less bright than white ones, feels surprise when a black student does well, and consistently gives black students lower grades. It appears that Juliet has two conflicting action-guiding mental states, an unbiased one and a biased one. However, the former is not an intellectual state in Marley-Payne’s sense, nor is the latter a non-intellectual one. The unbiased state may affect more than her assertions and conscious judgements. Recalling the view she has formed about equality of intelligence, Juliet may decide that it requires certain actions on her part, and go on to perform them. She may, for example, campaign for the introduction of institutional policies designed to ensure fair grading of all students and may willingly follow these policies herself. Conversely, the biased state may affect her assertions and conscious judgements. Of course, Juliet will not assert or consciously judge that there are racial differences in intelligence; if she entertains that idea she will reject it. But she may make unguarded comments or form spontaneous judgements that are shaped by the biased state — remarking that such-and-such a student isn’t very bright or thinking that another didn’t perform well in class.

    The real difference between the states lies, I suggest, not in the kind of behaviour they guide, but in the way they guide it. The unbiased state tends (in the right circumstances) to become active as a conscious occurrent thought, and it is by doing so that it guides her reasoning and behaviour. It is only when Juliet recalls her view about the equality of intelligence that she tries to shape her thoughts and actions to fit it. When she does not recall it, her thoughts and actions are guided by her biased state, which operates by default, without conscious activation. We might call the unbiased state a reflective state and the biased one a non-reflective state. This analysis extends easily to other cases of conflicting behaviour, such as absentmindedness.

    It is better, then, to see conflicting behaviour as involving conflict between a reflective state and a non-reflective one. (I assume that reflective states need not always be propositional in character; a reflective state might be encoded in a visual image, as in the case of Marley-Payne’s Maya the navigator. Indeed, there is a strong case for the view that conscious thinking always involves sensory imagery of some kind; see e.g., Carruthers, 2011; Prinz, 2011). We might also speak of two kinds of behaviour, reflective and non-reflective, the former being the product of conscious thought, the latter produced without conscious thought. The same observable behaviour might be reflective on one occasion and non-reflective on another.

    Does this pose a problem for Marley-Payne’s argument? Is there a case for restricting the term ‘belief’ to reflective states only (‘reflectivism’)? I don’t think so. Marley-Payne’s points transfer smoothly to reflective and non-reflective states. His examples of intellectual states (Bob the baseball player, Sam the teenager, Rich the carnivore, Maya the navigator) serve equally well as examples of reflective states, and the same points could be made about their imperfect evidence sensitivity and inferential integration. (An exception is the case of Malcolm the snap judger, whose spontaneous assertions differ from his reflective judgements. Marley-Payne presents this as a conflict between two kinds of intellectual behaviour, but it is better described as one between non-reflective and reflective behaviour — Malcolm’s snap judgements being guided by non-reflective states and his reflective judgements by reflective ones. However, this doesn’t weaken the case against reflectivism. If anything, it strengthens it, since the non-reflective states involved are more sensitive to evidence than the reflective ones.) Similarly, Marley-Payne’s examples of non-intellectual states, such as those involved in action-planning in sport, are also examples of non-reflective states, and the same points hold about their sophistication.

    If both reflective states and non-reflective states can count as beliefs, how should we describe cases of behavioural conflict? Consider Juliet again. Both her reflective and non-reflective states are weakly integrated, interacting with background beliefs to produce intelligent behaviour across a range of contexts. And both (we may suppose) are weakly evidence sensitive, the reflective one being sensitive to linguistic evidence of various kinds, and the non-reflective one to personal impressions of black people (or to distorted presentations of them in the media). So what does Juliet believe? Should we select the state that is more sophisticated and say that that is her belief? This is not an attractive option. Sophistication, as Marley-Payne shows, is a complex matter, and judgements of relative sophistication will often be hard to make. Moreover, if we were to focus on the effects of either state in isolation, we would, I suggest, have no hesitation in describing them as manifestations of belief. It is more plausible, then, to regard both states as beliefs.

    This raises a problem, however. For it means that Juliet has flatly contradictory beliefs about the existence of racial differences in intelligence, and how can such beliefs survive for any time within the same inferential system? Beliefs may not need to be perfectly integrated, but shouldn’t they be well enough integrated for the system to detect and eliminate explicit violations of the law of non-contradiction? An obvious option here would be to say that the two beliefs belong to separate, relatively insulated inferential systems, and that integration is relative to each system. Thus, a reflective belief need be integrated only with other reflective beliefs, and a non-reflective belief only with other non-reflective ones. This is not an implausible view. There is a large body of work in cognitive and social psychology devoted to the idea that humans have two separate reasoning systems, or at least two types of reasoning process, with different functional characteristics (e.g., Evans, 2010; Stanovich, 2004; for a survey, see Frankish, 2010).

    I think this is the right approach. However, it threatens to undermine the unified account of belief that Marley-Payne proposes. For the differences between the two kinds of belief may be as significant as the similarities. Even though both kinds are imperfectly evidence sensitive, reflective belief is typically much more sensitive to linguistic evidence than non-reflective belief is, and it is typically susceptible to more rapid revision. Moreover, applying reflective belief in conscious reasoning and decision making is an effortful conscious process, requiring working memory resources, whereas non-reflective belief processes are non-conscious and effortless. From a practical point of view, these differences may be very significant. If one wishes to modify a person’s behaviour, then it is important to know whether it stems from a reflective belief or a non-reflective one, since different strategies will be more likely to succeed in each case. Rational argument, for example, is less likely to change a non-reflective belief than a reflective one. Similarly, if one wants to encourage a person with conflicting beliefs to act upon one rather than the other, the best strategy will depend on whether the preferred belief is reflective or non-reflective. If the former, then one should tell them to reflect before acting; if the latter, then it would be better to advise them to trust their instincts. These are only some of the more obvious differences between the two kinds of belief. A deeper understanding of the psychology of the two states will doubtless reveal more. (For my own account of reflective belief, see my 2004, 2009, and for discussion of the account’s implications for the control of implicit bias, see my forthcoming.)

    In attacking intellectualist theories of belief, Marley-Payne performs a valuable service. Philosophers often employ an idealized conception of belief, modelled on reflective belief. By showing that the notion of belief has a wider, less rigid application, which is more tightly connected to action, Marley-Payne provides a useful corrective. But we should not let this blind us to the existence of important distinctions within the class of beliefs. There are different types of belief with very different functional profiles, and recognizing this is crucial for understanding, predicting, and influencing human behaviour.

    References

    Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. New York: Oxford University Press.

    Evans, J. St. B. T. (2010). Thinking Twice: Two Minds in One Brain. Oxford: Oxford University Press.

    Frankish, K. (2004). Mind and Supermind. Cambridge: Cambridge University Press.

    Frankish, K. (2009). Systems and levels: dual-system theories and the personal-subpersonal distinction. In J. St. B. T. Evans and K. Frankish (eds.), In Two Minds: Dual Processes and Beyond (pp. 89-107). Oxford: Oxford University Press.

    Frankish, K. (2010). Dual-process and dual-system theories of reasoning. Philosophy Compass, 5(10), 914-26.

    Frankish, K. (forthcoming). Playing double: implicit bias, dual levels, and self-control. In M. Brownstein and J. Saul (eds.), Implicit Bias and Philosophy Volume I: Metaphysics and Epistemology. Oxford: Oxford University Press.

    Prinz, J. (2011). The sensory basis of cognitive phenomenology. In T. Bayne and M. Montague (eds.), Cognitive Phenomenology (pp. 174-96). New York: Oxford University Press.

    Schwitzgebel, E. (2010). Acting contrary to our professed beliefs, or the gulf between occurrent judgment and dispositional belief. Pacific Philosophical Quarterly, 91(4), 531-53.

    Stanovich, K. E. (2004). The Robot’s Rebellion: Finding Meaning in the Age of Darwin. Chicago: University of Chicago Press.


    1. Keith provides an incisive summary of the key points of my paper and suggests some problems that remain unanswered. As I understand him, the general worry is that the transition from my negative arguments to the positive view I hint at is not as smooth as I suggest. In this he is completely right. Defending my preferred account of belief is a large task, in no way completed here. It’s one I attempt over the course of two companion papers ‘Task-Indexed Belief’ and ‘Pragmatism about Knowledge’ – links to both are on my website (apologies for the shameless plug).

      Keith presents some specific reasons to think my argument will not go through, and these I want to resist, as I’ll discuss below. I take him to have three main points, which are as follows:

      (1) When defining intellectualism, it’s better to do so in terms of reflectively endorsed states than in terms of states producing intellectual behavior.
      (2) If we accept a broader notion of belief, as I argue, we are committed to a wide range of subjects who possess contradictory beliefs they are unable to correct – this requires explanation.
      (3) The best explanation of the phenomena comes from a dual theory of belief.

      Though I’m in agreement with the first two points, I don’t agree with the third – I don’t think the dual theory is satisfactory, and my preferred view is a unified but indexed account of belief.

      Keith’s first point is well taken. Indeed I’m sympathetic to the idea that there is no way of drawing the intellectual/non-intellectual distinction that stands up to scrutiny – I think this is all to the good for me. I went with the easier-to-characterize approach to start off with; I hope it allowed us to get to a discussion of the central examples without too much theoretical preamble. I had alternative characterizations of intellectualism, like the one Frankish advocates, in mind in section 4.2. I take it to be part of my argument that the same problems apply to this view – as he notes.

      The second point is somewhat complex. Frankish suggests, as a plausible psychological principle, that a subject should generally be able to resolve contradictions in belief – however, this is highly controversial. One reason is that many people take Frege cases to be examples of contradictory belief – for example, Lois Lane believes that Superman can fly and that Clark Kent can’t, and since Superman is Clark Kent, this is flat-out believing a contradiction.

      In spite of this, I do think an adequate account must say something about contradictory beliefs. There seem to be important cognitive differences between, say, the implicitly biased subject and a person making a simple logical error. Our folk psychological account should make sense of this. (This is the problem I aim to address in my paper ‘Task-Indexed Belief’.)

      Frankish argues that the solution comes from a dual theory of belief. Roughly, we have two kinds of beliefs (system-1 beliefs and system-2 beliefs) that form two encapsulated inferential systems. One can have unresolvable contradictions across systems but not within them. I am not convinced, however. I agree that our beliefs cluster into (more-or-less) inferentially encapsulated compartments, but I think there are more than two clusters. The examples discussed in the paper suggest that there are many dimensions of conflict rather than two: there is conflict between verbal endorsement and conscious imagery, between reflective endorsement and snap judgement, between sincere assertion and unconscious action (among others), and none of these conflicts can be resolved in the manner one resolves a typical logical inconsistency. I suggest that instead of a dual theory of belief, we adopt an indexed theory of belief. On this view belief is a three-place relation that holds between a subject, a proposition and a task – so a subject can believe a proposition relative to one task but not another. These tasks pick out the range of behavior a belief state guides. Since we can have as many tasks as we need, this view is more flexible than Frankish’s dual-belief theory.

      An interesting further question is whether two-system views in psychology give independent reason to adopt a dual theory of belief (even if they don’t provide a solution to contradictory beliefs). There’s a great deal of psychological research in this area with significant implications for a range of areas of philosophy. I’m not convinced it motivates a dual theory of belief, however.

      Obviously there is a huge amount of empirical literature here, and saying anything comprehensive is a big project. I’ll give my general worry about views like Frankish’s, though. The central feature of two-system theory, as I understand it, is a distinction between two types of mental process rather than two kinds of mental state. The crucial distinction is between mental processes that involve ego depletion (system-2 processes) and those that don’t (system-1 processes). Processes that involve ego depletion include effortful conscious reasoning and resisting temptation. I think it’s a hugely important discovery that we have a limited capacity to perform such processes – that performing any one inhibits our ability to perform any of the others (without a recovery period).

      One might think we can move from two kinds of process to two kinds of state – where we have system-1 beliefs that involve system-1 processes and system-2 beliefs that involve system-2 processes. However, I’m inclined to resist this move. Indeed Kahneman, one of the architects of the two-system view, is explicit in Thinking, Fast and Slow that talk of two systems is only a metaphor. He suggests that it is a useful heuristic to think of our mind as containing two homunculi with independent beliefs and desires, but that is not a commitment of his view.

      I think that this heuristic, though useful in some contexts, is misleading in others. In particular, it obscures how interconnected the two types of process are. In deliberation, for example, both system-1 and system-2 processes are intimately involved in the end result. Effortful reasoning can only be done on the items in working memory, and it is system-1 processes that determine what gets recalled from long-term memory. There is continuous interplay as deliberation progresses, with the result of an effortful inference determining what is likely to be recalled next. The interdependence of the two kinds of process suggests to me that we won’t be able to use them as a means of distinguishing two kinds of action-guiding state (which I take beliefs to be). I find it plausible that most actions will be influenced in part by both types of process.

      This is a fascinating issue that I certainly haven’t got to the bottom of, and I’m very grateful to Keith for encouraging me to think more carefully about it. I’d appreciate hearing any further thoughts he has.

      1. Hi Jack,

        Thanks for your reply. I confess that I’ve not read your other two papers, but I will do and may discuss them on my blog. Here I’ll just make some brief points before comments close.

        Dual or indexed? A dual theory of belief can allow for distinctions within each type. The claim is merely that there is a fundamental binary division. In fact, I think that acceptance (a generic reflective attitude of which reflective belief is a subclass) can be restricted to particular deliberative contexts, and thus be, in that sense, task-indexed. I’d argue, however, that belief proper is not restricted in this way (see chs. 4-5 of my 2004). But I need to read your paper on task-indexed belief!

        Do dual systems require dual beliefs? This is an important and relatively neglected question. I doubt there’s a general answer. There are different versions of DS theory, with different assumptions and commitments. But I suspect that most versions will require some form of dual-state theory. (In social psychology, DS theory is often associated with dual-attitude theories, but attitudes in that sense are different from beliefs.)

        Is there interplay between the systems? Unlike Kahneman, I don’t see dual-systems talk as metaphorical but as having structural commitments. (I think most DS theorists would agree — at least in cognitive psychology.) However, this doesn’t require that the two systems are causally isolated. In fact, I think system 2 is causally dependent on system 1 in multiple ways. System 1 processes trigger system 2 processes, supply content to them, motivate them (on my view system 2 processes are intentional actions), and make them effective in action. However, this causal dependency needn’t undermine the explanatory distinctness of the two systems. We can think of system 2 as a functionally distinct system that is partially realized in system 1 processes. This means that some human actions will have dual intentional explanations — one in terms of reflective states and another in terms of non-reflective ones — but I would argue that this is neither unacceptable nor (when properly spelled out) especially counterintuitive. (For more on this, see my 2012 paper in Mind & Society.)

        That will have to do for now. Thanks for a stimulating paper and discussion!

      2. Thanks – that’s really helpful information for further research, apart from anything else. I’ll just briefly state how I see things – since time is running out! But this isn’t at all meant to be a counterargument.

        My general thought is that even if we grant that two-system theory provides a theoretically significant distinction between two kinds of belief state, it is not necessarily one that we want to import into our folk psychology (so that there really are two kinds of belief). There are a great many psychologically significant distinctions that cross-cut and subdivide our folk distinctions, but we can’t adopt all of them or our folk discourse would lose its convenient simplicity. (The force of this point obviously gets us into fairly deep methodological territory…)

        I must admit that I’ve mostly looked at the social psychology perspective on two systems rather than the cognitive science one. I’ll have to read up on the papers you mention to see what I think about them.

        A final point is that I take belief to be a standing state with a characteristic functional role – I’m not a fan of ‘occurrent beliefs’. So the two systems have to give us distinct functional states with appropriate connections to action and environment. I worry that this can’t happen if the two kinds of process are too intertwined. Perhaps you think this challenge can be met though.

        I’ll leave it here but thanks again for a great exchange.

  2. How boring it is for philosophers to agree!  I agree with almost everything in Jack Marley-Payne’s nicely argued essay.  I’ll try not to be too boring.

    Sometimes we sincerely say or silently think one thing but lots of our other reactions and behavior seem more consistent with the contrary opinion.  Example: Someone sincerely says or silently thinks that all the races are intellectually equal; but in most of her reactions, when confronted with real people in daily life, she shows substantial racial bias in assessing intelligence, e.g. feeling surprised when a black person says something smart but not when an otherwise similar white person does.  Another example: Someone sincerely says or silently thinks that his dear dead friend has gone to Heaven; but his emotional reactions don’t really fit with that.  On intellectualist views of belief, what we really believe is the thing we sincerely say or silently think, despite the other reactions.

    Jack raises some problems for intellectualism.  I’ll get to those in a bit.  I agree they are problems, and I agree with his final anti-intellectualist view.

    Intellectualism might be defended on four grounds.

    (1.) Intellectualism might be intuitive.  Maybe the most intuitively plausible thing to say about cases of sincerely denied racism is that the person really believes that all the races are intellectually equal, but she has trouble putting that belief into action.  Similarly, the man really believes that his friend is in Heaven, but it’s hard for him to avoid reacting emotionally as if his friend is ineradicably dead rather than just “departed”.

    (2.) Intellectualism might fit well with our theoretical conceptualization of belief.  For example, maybe it’s in the nature of belief that beliefs are responsive to evidence and deployed in reasoning in a manner that suggests that they are present when, and only when, they are intellectually affirmed.  The implicit racist’s intellectual attitude is reasons-responsive in a belief-like way, while her spontaneous negative assessments of certain ethnic minorities are not.  Relatedly, but somewhat differently, it might be partly definitional of belief that beliefs are easily knowable through introspection, and this too might favor intellectualism, since the implicit racist better knows what she is disposed to say about the races than she knows her pattern of spontaneous reactions when confronted with people of different races.

    (3.) Intellectualism might cohere nicely with the idea of belief as generally used in philosophy.  Epistemologists widely regard knowledge as a type of belief.  Philosophers of action commonly think of beliefs as coupling with desires to form intentions.  Philosophers of language have created a large literature on “belief reports” like “Lois Lane believes that Superman is strong.”  If we want philosophy to be a unified discipline, we ought to prefer an approach to belief that fits with its use in epistemology, philosophy of action, and philosophy of language.  And maybe the intellectualist approach fits best.

    (4.) Intellectualism might be the best practical choice for the use of the term “belief” because of its effects on people’s self-understanding.  For example, it might be more effective, in reducing unjustified racism, to approach an implicit racist by saying “I know you believe that all the races are intellectually equal, but here, look at these spontaneous responses you also have” than to say “I know you are sincere when you say that all the races are intellectually equal, but it appears that you don’t through-and-through believe it”.  Tamar Gendler, Aaron Zimmerman, and Karen Jones have all plausibly suggested that something like the former approach is more promising in opening people toward personal change.

    Marley-Payne’s essay focuses on the second of these four reasons.  Gendler and Zimmerman especially have argued that our intellectual responses are responsive to evidence in a way that our emotional and habitual responses are not.  According to Gendler and Zimmerman, this is good theoretical reason to embrace an intellectualist approach to belief.  Gendler calls the pattern of spontaneous and emotional responding, which tends not to be immediately responsive to evidence, alief rather than belief.  Marley-Payne presents a nice range of evidence suggesting that the intellectual sides of our minds are often not as reasons-responsive as the Gendler-Zimmerman view appears to suggest and also that our non-intellectual spontaneous responses are often more reasons-responsive than their view suggests.

    I agree with Marley-Payne about this.  I don’t think he quite closes the case, since there’s plausibly still room for defenders of the Gendler-Zimmerman view to argue that the intellectual side of our mind is substantially more responsive to reasons and evidence than the less intellectual side.  But I think the ball is in their court now: They owe it to us to articulate more precisely in what way this is supposed to play out and to deal with Marley-Payne’s cases.  To Marley-Payne’s evidence I would also add the work of Jonathan Haidt on people’s moral rationalizations and the large psychological literature on motivated reasoning.  If, as these literatures suggest, our intellectual patterns of reasoning are quite often (mostly?) post-hoc defenses of judgments arrived at spontaneously from a complex mix of causes, many of which have little to do with evidence, then those intellectual judgments are not going to be especially responsive to reasons, despite the human tendency to construct lawyerly defenses that make them seem as though they are responsive.

    How about some of the other defenses of intellectualism?

    Let’s set aside the issue of whether it’s theoretically definitional of belief that beliefs are easily introspectible, which is an alternative version of the theoretical argument for intellectualism.  How easily introspectible beliefs are is an issue that should fall out of an account of belief rather than drive an account.  If intellectualism is true, beliefs are probably easily introspectible.  If intellectualism is false and we often fail to believe, or fail to fully believe, what we sincerely endorse, then introspectibility is not so straightforward.  But let’s not decide in favor of intellectualism just to save claims of easy introspectibility.

    This leaves us with defenses 1, 3, and 4: intuitiveness, coherence with uses of “belief” in other areas of philosophy, and the practical effects of the linguistic decision on people’s self-understanding.  All of these defenses have some merit, I think.

    On intuitiveness: Intuitively, what do people say about cases like the implicit racist and the mourning friend?  I don’t think the intuitions are entirely univocal.  Sometimes we seem to want to say that the people in question believe whatever it is that they intellectually endorse.  It’s natural to say that the racist really does believe in the intellectual equality of the races, and that the mourning friend really does believe his friend is in Heaven.

    On the other hand, sometimes it seems natural or intuitive to say that the implicit racist doesn’t really, or fully, or deep-down believe that the races are equal, and that the mourner maybe has more doubt about Heaven than he is willing to admit to himself.  I’m inclined to think it’s approximately a tie, intuitively, between straight-up intellectualism and the more nuanced view that Marley-Payne and I would favor, on which intellectual endorsement isn’t sufficient for belief.

    On coherence with other areas of philosophy: Discussion of knowledge as justified true belief and discussion of “belief reports” in philosophy of language do seem to work from a background assumption of something like intellectualism.  Mainstream epistemologists are interested in whether what one intellectually believes counts as knowledge when it is also true and justified.  Philosophers of language tend to take sincere assertion as criterial for belief.  So if the aim is coherence with these areas of epistemology and philosophy of language, the intellectualist view seems to fit neatly.

    However, maybe coherence with intellectualist views of belief in epistemology and philosophy of language is a mistaken ideal and not in the best interests of the discipline as a whole.  For example, some of the puzzles in philosophy of language around the use of names might seem less puzzling if we rejected the kind of intellectualism on which assertion is criterial for belief and allowed that maybe “Superman is strong” is something that Lois Lane only kind-of believes, or toward which she has a messed-up in-betweenish attitude, despite her sincere assertion.

    Also it’s not clear that intellectualism is favored if we consider coherence with philosophy of action.  If we think not just of intellectual plans – “I will do A!” said to oneself – but also of enacted choices, we maybe create some trouble for intellectualism.  Why did seeing the name “LeShaun Jackson” on the top of the resume cause the implicit racist to set aside the application?  Well maybe because she wants to hire someone smart for the job and she believes that someone with that name will belong to a race of people who are not smart.  I’m inclined to think that’s not quite the right thing to say, partly because I think the belief-desire model of action explanation is too simplistic, but if one wants to accept the belief-desire model of action explanation that is commonly seen in philosophy of action, such cases seem to fit better with anti-intellectualist views of belief than with intellectualist views.

    On practical effects on people’s self-understanding: I don’t doubt that Gendler, Zimmerman, and Jones are right that many people’s immediate reaction to being told that they don’t really have all the handsome-sounding egalitarian beliefs that they think they have, and all the handsome-sounding religious beliefs that they think they have, will be to get defensive.  People don’t want to be told that they don’t really believe the self-flattering things that they think they believe!  They’ll react better, and probably be more open to rigorous self-examination, if you start on a positive note and coddle them a bit.  Right.

    But I don’t know if I want to coddle people in this way.  I’m not sure this is really the best thing to do in the long term.  I think there’s something painfully salutary in thinking to yourself, “Maybe deep down I don’t really think that black people are very smart, or at least important parts of me don’t think that.  Maybe my attitude toward Heaven is mixed up and multivocal.” This is maybe a more profound kind of self-challenge, a fuller refusal to indulge in self-flattery.  It highlights the uncomfortable truth that our self-image is often ill-tuned to reality.

    So this issue, too, I could see cutting either way.

    Although all four defenses of intellectualism have some merit, none is decisive.  This tangle of reasons seems to leave us approximately at a tie so far.  But we haven’t yet come to…

    The most important reason to reject intellectualism about belief.

    The reason is this: Given the central role of the term “belief” in philosophy of mind, philosophy of action, epistemology, and philosophy of language, and given the importance of the term in our self-understanding (especially if we include the related attribution “I think that P”), we should reserve the term for something centrally important.  We should reserve the term for, probably, the most important thing in the psychological vicinity – the thing, whatever it is, that deserves such a central role in so many areas of philosophy and in our folk psychology.

    What we sincerely say to ourselves, what we intellectually endorse, is important.  But it is not as important as how we live our way through the world generally.  What I say about the intellectual equality of the races is important, but it’s not as important as how I actually treat people.  My sincere endorsements of religious or atheistic attitudes are important, but they are only a slice of my overall religiosity or lack of religiosity.

    For this reason, I favor – and I think Marley-Payne also favors – a broad conception of what it is to have a belief on which to have the belief that all the races are intellectually equal is, in part, to be disposed to sincerely say that they are all intellectually equal, but also just as much or more to be disposed to act and react generally as if it were true – to have the full suite of reactions to people, and spontaneous responses, and self-talk, and implicit assumptions, and emotional reactions, that fit with possession of that egalitarian attitude.  To believe the races are equal, or that God exists, or that snow is white is to steer one’s way through the world, in general, as though the races are equal, or God exists, or snow is white, and not only to be disposed to say they are.  It is this overall pattern of self-steering that we should care most about, and to which we should, if we can do so without violence, attach the philosophically important label “belief”.

    This is all just a complicated – I hope not too boring – way of saying I agree with Jack.

    1. So the guy asked me, “Do you believe in science?” (Again, figuratively between the eyes with a two by four.) It took a while. Finally, “It’s not a question of ‘believing in.’ It’s about acknowledging the facts.”

    2. I want to thank Eric for his sympathetic and thought-provoking comments. He provides a very interesting discussion of alternative methods of arguing for intellectualism, and I’ve enjoyed exploring these related issues. I’m sympathetic to much of what he says here (unsurprisingly); however, I think these points raise a number of interesting issues. I’ll go through them in turn.

      Intuitiveness: I agree that intuitions are mixed and don’t clearly favour intellectualism. Some other examples I find telling are intuitive attributions to animals and to athletes. We’ll happily say ‘the dog knows it’s time for a walk’ or ‘Federer knew Nadal was going to hit a drop shot’, despite a lack of intellectual behavior – at least when the context is appropriate.

      Coherence with philosophy: I think philosophy as a whole does not clearly lean towards intellectualism. A couple of other pertinent examples: in philosophy of mind, the Dretske-Stalnaker information theoretic approach to the attitudes generally does not fit well with intellectualism; in epistemology, externalist accounts of knowledge – Williamson’s in particular – do not fit well with intellectualism. In short, whether intellectualism coheres with your broader philosophical theory depends on your philosophical theory.

      Practical effects of intellectualism: I agree that considerations as to the practical role of the term ‘belief’ should play a role in determining its extension; however, I don’t think this favours intellectualism. First, the beneficial consequences Eric describes only apply in cases of implicit bias, not to the whole family of cases of conflicting behavior. For example, Ben would not get offended and defensive if someone said to him that he believed that the bridge was open. Indeed, this might well be the most practically efficient way to get him to change his behavior. I don’t think we can decide on a case-by-case basis how applying the term ‘belief’ is most practically beneficial, or we will end up with a completely gerrymandered account of its meaning. Instead, we must look at the general practical role of the term (I’ll discuss this further below).

      A second point is that even when we look only at implicit bias, I’m not convinced our focus should be on the self-understanding of those who are implicitly biased. We must also consider the effects our usage has on those who are victims of implicitly biased behavior. We should consider what allows victims of systematic injustice (of which implicit bias is a part) to best understand and combat their situation. It’s at least debatable whether, for example, black Americans can best combat structural racism by viewing implicitly biased people as having good beliefs but unfortunate sub-personal attitudes. One might be better able to mobilize resistance if one describes those with implicit bias as possessing discriminatory beliefs.

      Conclusion: In summing up, Eric claims that when deciding what counts as belief, we should choose ‘the most important thing in the psychological vicinity’. I agree, though I think it’s important to be clear on what is meant by ‘important’. If it meant ‘important to psychological theory’, the matter would be determined empirically by uncovering the underlying natural kinds. What I think Eric has in mind, though, is the practical significance of the term ‘belief’. I agree, though I think this claim takes some serious defense – this is something I do in my paper ‘Pragmatism about Knowledge’ (link on my website). If one accepts, as I argue, that the practical role of belief concerns, primarily, predicting and explaining behavior, then we have a good reason to reject intellectualism.

      1. Thanks for the thoughtful response, Jack! I think I agree with almost everything. I’ll have to check out your paper on pragmatism and knowledge, since I’m inclined to agree with Andrews, McGeer, Morton, and Zawidzki that philosophers have overplayed the role of predicting and explaining behavior through belief-desire attribution — there are simpler, commoner ways to predict (Andrews) and there are central social and self-regulatory functions of belief self- and other-attribution (McGeer, Morton, Zawidzki). I don’t know if you get into this perspective at all in your paper — I’m not seeing those names in the works cited.

      2. Great point. I was being super-rough when I associated the role with predicting behaviour. In the paper I do in fact argue that belief attribution has a social function. I’ve not, unfortunately, related this to the authors you mention – though I will definitely now do so.

        I am perhaps more sympathetic than you are to the Dennettian idea that attitude attributions allow us to predict behaviour in a particularly efficient and systematic manner, but I agree that this cannot be the whole story.

        I think that the constraints of evidence sensitivity and inferential integration show that a central feature of beliefs is that they can be influenced in a systematic manner. When they respond to evidence and cohere with other beliefs, they tend to lead to successful action. Thus I think that the role of belief attribution, when combined with knowledge attribution and rationality ascription, is to identify and promote successful agents. I take this to be mutually beneficial in a cooperative society.

  3. Thanks for posting.

    I’m not sure exactly what this paper is claiming. Are you arguing that, in cases of doing one thing and saying another, we ought to ascribe to people the belief(s) that their actions suggest they hold?

    Or are you arguing that in general, a person’s beliefs are as their actions suggest, and therefore we ought to ascribe to people the belief(s) that their actions suggest they hold?

    I have some points to make against this second option.

    A consideration against such an account of belief in cases of ‘saying one thing and doing another’ is that it seems open to the objection that some beliefs could conceivably never be evidenced by non-linguistic behavior, and that these beliefs would therefore be indeterminate: we could never know whether S believes P as opposed to P v Q, because no action S ever performed allowed us to distinguish between them.

    Suppose I sincerely endorse a philosophical belief that is not verifiable through my behavior, such as that ostensible causal interactions are explained by occasionalism. It’s safe to say that my views on occasionalism will be causally inert with respect to all of my non-linguistic behavior. But let’s say my doppelganger has the opposite philosophical view about causal powers. Since I and my doppelganger sincerely endorse mutually exclusive propositions while manifesting identical non-linguistic behavior, it follows that non-linguistic behavior is not a reliable guide to beliefs that are inert with respect to non-linguistic behavior, such as philosophical beliefs.

    The consequence of this kind of view, as I see it, is a deflation of beliefs to ‘acceptances,’ namely causally efficacious beliefs. Clearly, causally inert beliefs exist. Insofar as your account hints at irrealism about them, I think realists will take issue with it.

    1. Thanks for the comment Jason. The short answer is that I think that both intellectual and non-intellectual states are beliefs. In the paper I only really provide arguments against intellectualism without specifying which alternative is correct. Of course if none of the alternatives are plausible, the arguments will be less convincing (this is why I try to defend a positive view in a follow-up paper, ‘Task-Indexed Belief’).

      I agree that the view that only states manifested in non-intellectual behaviour are beliefs is unattractive for the reasons you suggest. I would add that speech is itself a highly significant kind of action, so it would be counterproductive to ignore it when giving an account of belief.

      I should note though that I’m sympathetic to functionalism, and thus I don’t think it’s possible to have beliefs that are necessarily causally inert – though the causal consequences may only be assertion or conscious endorsement. I don’t see this as unattractive.

      This leads me to an inclusive view on which both intellectual and non-intellectual states are beliefs. As Keith Frankish notes in his comments, this leads to the somewhat surprising consequence that there are many more cases of contradictory belief than one might initially suppose. I explain how I think we should respond to this in my response to him.

  4. Thanks for this paper Jack, I’m inclined to agree with most of the claims you make here. I was wondering if I could ask about an issue slightly different to the one you tackle though, and which I had at first thought your paper would be about. There can be cases, it seems, where someone’s actions and reports come apart, but the actions are in no particular sense ‘unconscious’ or ‘automatic’ or ‘implicit’. The example I heard, so long ago that I forget the source, is the member of an apocalyptic religion who sincerely tells everyone that the world is ending next week, but keeps on safeguarding their future interests (planning for college, retirement, etc.) in a way that would make no sense if the world really was ending. But this latter pattern of behaviour seems to be driven by conscious thinking, by ‘system 2 processes’, etc. Do you take the action-based and intellectualist theories of belief to have implications regarding what this person believes, or is this a case of something different, like self-deception?

    1. Thanks for your comment Luke. I think it points to the natural next point of investigation, once we look at where to go if we’re rejecting intellectualism.

      As Keith Frankish notes, if we adopt a broader account of belief, we are committed to ubiquitous (prima facie) rational contradictory beliefs. One might think this is a problem, and thus pushes us back towards intellectualism.

      I think your example shows how intellectualism is no better off than my preferred view. This kind of conflict can occur between two different kinds of intellectual state just as much as between a conscious and an unconscious state. Another example is that someone might assert one thing while representing the opposite in their visual memory.

      In my view, this observation pushes us to a radical revision of belief according to which it is a three-place relation indexed to tasks. I say more about how I think this goes in my response to Keith. I’d be very keen to discuss this further, but maybe it would be best to hear your thoughts on what I’ve said so far before going on.

      1. Thanks for the reply Jack. I think the idea of task-indexed belief is really interesting, and certainly seems to capture something important, but I wonder how it answers the following sort of worry: that part of what distinguishes beliefs-that-guide-action from states-that-produce-action is the range of actions they can produce. If I have a certain mental state that, whenever I see a bright light, prompts me to flinch, but has no other effects, we wouldn’t call that a belief that ‘one should flinch before bright lights’ (or a belief-desire pair, or whatever). It would just be a reflexive sort of disposition. To have a belief, I have to have something that will do an indefinitely wide range of things, when combined with an indefinitely wide range of other mental states.

        But on your view it seems like I could have a belief relative to only one, narrowly-defined, task (based on a very quick skim of your paper). I guess I don’t see why that’s a matter of belief and my reflexive response to bright lights isn’t. Is there a cut-off point of generality, so that beliefs must be held at least relative to some (hopefully non-arbitrary) minimum degree of generality? Or is there just a vague gradation, with the limiting case of belief (belief relative to a single, maximally specific, task) being something we wouldn’t normally call a belief?

      2. That’s a good question – and thanks for taking the time to look over my paper.

        I think the issue of overly narrow tasks is a serious worry, and I discuss it towards the end of the paper. The central point is that I want to combine the indexing move with the evidence sensitivity and inferential integration constraints I discuss in the conference paper.

        Therefore, if a state is a belief, it must be part of a family of information states that together guide a range of action in a systematic manner. This (I claim) entails that the tasks belief can be indexed to must be reasonably broad, which avoids the kind of problem you raise.

  5. Hi Jack, thanks for this really interesting paper. As you’d expect given our previous conversations about these things, I’m less enthusiastic about your position than Eric! Let me try to press you on a few points, with apologies if I’ve overlooked places where you address these things already:

    • First, I wonder whether the intellectualist needs to accept the framing you give at the start, i.e. that in her view belief is tied to assertion rather than action. There are several reasons why an intellectualist might wish to deny this, e.g.: that assertion is itself a kind of action; that just as all behavior lies on a spectrum between intelligent (/reflective) and unintelligent (/reflexive), so it is with speech; and that — this last point weighs really heavily for me — with assertion (or self-conscious judgment) there’s a particular kind of room for insincerity (with oneself and others) that isn’t so easily there with behavior. (Thus the common slogan that “actions speak louder than words”, which I don’t think entails that belief isn’t intelligent, but only that speech isn’t always the best manifestation of it.) If an intellectualist took one of these routes, how much of your ensuing argument would she have to disagree with?
    • Second, it seems that there are some variants of intellectualism that can resist your challenges to the criteria of evidence-sensitivity and inferential integration. One of these would be a view of evidence-responsiveness like the one Grace Helton develops in her paper. Another might appeal to considerations like those in Ram’s and Markos’s papers in claiming that (i) the concept of belief is tied to those of reasoning and justification, and (ii) reasoning and justification are essentially self-conscious, intellectualist affairs (though of course it requires work to say what this last idea comes to). Do you think these positions are promising at all?

    • Third, I wonder what you’d say to a more concessive intellectualism, like Keith’s I suppose, that grants your point that there’s a “practically significant” concept of belief that’s non-intellectualist, but insists that the intellectualist concept is practically useful as well, say because it provides a measure of sincerity, or because there may be a fuller kind of virtue (or vice) in believing things in both the non-intellectual and intellectual senses. (This last is close to Gendler’s position, I suppose. Eric argues something similar in his papers, but doesn’t accept that there are really two concepts here.) Do you think instead that this concept of intellectual belief doesn’t pick out anything practically significant at all?

    I could ask more, but I’ll leave it there for now …

    1. Thanks for these questions John. I think they’re pushing me towards revealing how my views are more sweeping and revisionary than perhaps I was letting on!

      With your first point, I very much agree that the intellectualist is not forced to tie her notion of intellectual states to assertion – there are alternatives available. However, I am sceptical that any of them fare any better. I wasn’t quite sure what alternative you had in mind here (unless you were just setting things up for your second point). Are you suggesting that only states that are strongly sensitive to evidence are really beliefs? If so, my worry would be that there are very few beliefs indeed, and so they don’t play much of an explanatory role.

      With your second comment, I think the upshot is that I need to expand my anti-intellectualism to a range of epistemic concepts, in order for the view to be stable.

      With respect to Ram and Marco’s view, I disagree with the second claim – I see no reason why reasoning and justification must involve self-conscious states. We can have unconscious information states that update in response to both each other’s content and incoming evidence – and we can see whether such updates accord with appropriate rules of inference. I’m inclined to think this provides us with all we need for reasoning and justification. I would want to back this up with analogous arguments to the ones I gave in the paper (there’s nothing especially intelligent about self-conscious reasoning).

      I’m not sure why Grace’s view would pose a problem for me. I’m inclined to adopt something like the revisability constraint on belief (albeit in a way that allows for religious beliefs). Is your worry that she talks about rational revision in the face of norms? I’m inclined to think that these concepts too apply to non-intellectual states.

      With respect to your third point: I certainly think that beliefs are incredibly varied, and that there are important distinctions between different kinds of belief. However, I think there are many important distinctions to be made – with criteria coming from practical considerations, normative considerations, and psychological kinds – and they do not coalesce into a dichotomy between intellectual beliefs and non-intellectual beliefs.

      I try to say something more about why this is so in my response to Keith – in particular why I don’t think dual system research in psychology is any help. This is what draws me to an indexed view of belief which gives us a flexible tool to pick out the different roles of different beliefs. Do you have anything in mind as an alternative way to draw a theoretically privileged intellectual/non-intellectual distinction?

  6. Hey Jack,

    Very interesting paper! I think your criticisms of the arguments for intellectualism are persuasive, so, like Eric’s, this is a “boringly-in-agreement” comment. I just wanted to mention, since it came up, that I myself take the constraint on belief I posit in my paper to be consistent with an anti-intellectualist stance. I don’t see why belief’s being necessarily revisable should entail or even make plausible that belief is specially tied to verbal report.

    1. Hi Grace, I didn’t mean to imply that your view would tie belief to assertion or verbal report, but only that it might give a way of marking the intellectual/non-intellectual distinction (or some similar one) in such a way that belief falls clearly in the first class. I’ll say more about this in replying to Jack later on.

