
What Reasoning Might Be

Markos Valaris (University of New South Wales)

[PDF of Markos Valaris’s paper]

[Jump to Matthew Boyle’s commentary]
[Jump to Zoe Jenkin’s commentary]
[Jump to Chris Tucker’s commentary]
[Jump to Markos Valaris’s reply to commentaries]

Abstract

The philosophical literature on reasoning is dominated by the assumption that reasoning is essentially a matter of following rules. This paper challenges this view, by arguing that it misrepresents the nature of reasoning as a personal-level activity. Reasoning must reflect the reasoner’s take on her evidence. The rule-following model seems ill-suited to accommodate this fact. Accordingly, this paper suggests replacing the rule-following model with a different, semantic approach to reasoning.

1. Introduction

Reasoning is an activity familiar to all of us. But what exactly does one do when one reasons? For example, consider a subject who knows the following:

    1. If Socrates is human, then he is mortal.
    2. Socrates is human.

We naturally think that there is a cognitive act — albeit a rather trivial one, in this particular example — that the subject can perform in order to get to know the following:

    3. Socrates is mortal.[1]

What is the nature of this cognitive act?

According to many, reasoning is fundamentally a matter of following rules.[2] This is not just the relatively innocuous claim that reasoning (or at least good reasoning) can be described or captured by rules. It is the stronger claim that our subject gets to know that Socrates is mortal in virtue of being guided by, or following, a rule — a mental analog of the rule of modus ponens familiar from propositional logic.

The rule-following model is not often explicitly defended as such; most contemporary work on the nature of reasoning simply appears to take it for granted (see, e.g., Boghossian [2003; 2008; 2014], Broome [2013], Ichikawa and Jarvis [2013], Wedgwood [2002; 2006; 2007], Wright [2014]).[3] This should be surprising, because (as we shall see) the rule-following model is at odds with some deep-seated intuitions about the nature of reasoning. Reasoning, as a personal-level activity, seems to be a paradigmatic case of epistemic agency, or the kind of control that we have over our own minds. One natural corollary of this idea is that reasoning must reflect the subject’s own take on her evidence. The rule-following model has trouble accommodating this thought.

Now, the fact that the rule-following model faces trouble in this area has not gone unnoticed. Paul Boghossian (2003; 2008; 2014), in particular, has written extensively and forcefully on the topic. Recognizing those difficulties, however, has not led Boghossian to reject the rule-following model: on the contrary, he suggests accepting rule-following as a basic and unanalyzable mental capacity (2014, 16-18). Boghossian, like other proponents of the rule-following model, does not even consider alternatives. My positive goal in this paper is to develop just such an alternative.

On the rule-following model, the rules that guide reasoning are formal, in a sense analogous to the sense in which the rules of inference that characterize a formal system are: just as the latter deal only with syntactic objects within the system rather than with the subject-matter the system is intended to capture, the rules of reasoning deal only with our attitudes, rather than the subject-matter of our reasoning (although some proponents of the rule-following model, such as John Broome [2006; 2013], would like to avoid “higher-order” conceptions of rule-following, it is very doubtful, as we shall see, that their attempts can succeed). Intuitively, however, our reasoning is not guided by thoughts about our attitudes and their contents; it is guided by thoughts about the world. This suggests that we should think of reasoning in semantic terms. As I hope to show, such an approach to reasoning not only makes it easy to accommodate the role of the reasoner’s own take on her evidence in the activity of reasoning, but also allows for a more satisfying account of the place of reasoning in our cognitive lives.[4]

2. Frege’s Condition

My argument relies on a certain condition upon theories of reasoning — namely, that they must explain how reasoning reflects the subject’s own take on her evidence. My aim in this section is to explain and motivate this condition.

Consider Frege’s (1979, 3) characterization of inference, which Boghossian (2014, 4) also quotes approvingly:

To make a judgment because we are cognisant of other truths as providing a justification for it is known as inferring.

As Boghossian notes, Frege’s characterization has to be amended to allow for inferences based on false premisses, as well as for inferences in which the premisses do not actually support the conclusion. But for present purposes the central feature of Frege’s characterization is the claim that inferring p from a set of premisses R requires taking R to provide justification or support for p, and coming to believe p (partly) because of this. This is what Boghossian (2014, 5) calls the “taking condition” on inference, and what I will call “Frege’s condition”. Is this a reasonable condition?

Some authors use the terms “reasoning” and “inference” not just for personal-level performances, but also for sub-personal information processing. For instance, humans are pretty good at judging the emotions of other people on the basis of subtle facial and behavioral cues. Some authors would be happy to take such judgments to be the conclusions of unconscious inferences (e.g., Johnson-Laird 2008, 60–72). On such broad usage, Frege’s condition seems clearly false: when making such a judgment you need not be aware of the grounds on which you have made it. Thus, in endorsing Frege’s condition, I am implying that such judgments are not inferences. But why should the application of the terms “reasoning” and “inference” be restricted in this way?

The reason is that such broad usage obscures a crucial point. In one central sense of these terms, reasoning and inferring are things that we do. Reasoning is an expression of agency on our part; it is an exercise of the sort of control that we have over our cognitive lives. One way to bring this fact out is by noting that it has distinctive normative import: if you make a bad inference, we can legitimately criticize you as having been hasty, irresponsible, biased, and so on. By contrast, there is only a very thin sense in which the subject herself is responsible for her immediate judgments about another’s emotional state. If you misread another’s facial expressions, your mistake is more akin to a perceptual illusion than to a case of bad reasoning. A very natural way to explain this difference is to say that inferring reflects the subject’s take on what her evidence requires. By contrast, our system for judging other people’s emotional states appears to be a lower-level system, whose workings are opaque to us. Marking this difference is the point of Frege’s condition.[5]

Now, the question naturally arises as to what sort of state the “takings” required by Frege’s condition might be. I would like to leave this question as open as possible, except for one important constraint. Consider the following case. Tom has some irrational theoretical beliefs. For example, he believes that certain spots on people’s faces indicate that they have been marked by a demon, and once so marked they will soon die. Tom sees such spots on Bob’s face. As a result of his theoretical beliefs, he takes it that the spots on Bob’s face are evidence that Bob will soon die. As it happens, the spots on Bob’s face are a sign of advanced disease, and so their presence does in fact indicate that Bob will soon die. And yet Tom’s belief that Bob will soon die is not, intuitively, justified, no matter how reliable a sign of impending death the spots might be. This is because, although the presence of the spots on Bob’s face does support the conclusion that he will soon die, Tom (in light of his irrational theoretical beliefs) is not justified in taking them to support this conclusion.[6] Thus the takings required by Frege’s condition must be states that can be assessed for epistemic justification.

The reason I emphasize this point is that it rules out views — such as those proposed by John Broome (2013) and Chris Tucker (2012) — that identify the relevant takings with intellectual seemings. Since seemings are not the sort of thing that can be either rational or irrational, such views would have trouble explaining what is wrong with Tom’s inference above. After all, it surely seems to Tom that the spots on Bob’s face are evidence that he will soon die, and this seeming is veridical.[7]

Now, the archetype of a state that can be assessed for epistemic justification and rationality is belief. This suggests that the takings required by Frege’s condition might be beliefs. Since the nature of belief remains a controversial topic, this suggestion does not take us very far. Fortunately, more detail is not necessary for our purposes. I will occasionally speak as if the takings required by Frege’s condition are beliefs but, so long as the point about epistemic assessment stays in view, nothing much hangs on this. (For example, views according to which the relevant takings are more akin to non-cognitive states of endorsing a norm — in something like the sense of [Gibbard 1986] — are not ruled out by this requirement, since such states are supposed to be open to rational assessment.)

Finally, one might wonder what exactly the content of the takings required by Frege’s condition is. Once again, I want to leave this question as open as possible. Frege’s condition requires that, in inferring p from a set of premisses R, you must believe that R — in some sense — supports p. Different views might differ as to what exactly the relevant relation of support is, and even how exactly one must conceive of the relata. I will explain my own view in Section 4.

Let us now turn to the question whether the rule-following model is compatible with Frege’s condition.

3. Frege’s Condition and Rule-Following Theories of Inference

Consider a subject performing the elementary inference from Section 1:

    1. If Socrates is human, then he is mortal.
    2. Socrates is human.
    3. Therefore, Socrates is mortal.

Assuming that Frege’s condition holds, our subject must take it that (1) and (2) somehow provide justification for believing (3), and come to believe that Socrates is mortal partly because of this. Can rule-following theories explain how our subject meets this condition?

Let us begin by considering what the rule governing this bit of reasoning is. As most contemporary theorists recognize, this rule is not the familiar modus ponens rule of propositional logic: that is a rule that concerns strings of symbols in a formal language, not a rule of reasoning. We want something analogous, but concerning transitions among beliefs. Such a rule might be formulated as follows:

(MP) If you are rationally permitted to believe both that p and that ‘If p, then q’, then you are prima facie rationally permitted to believe that q. (Boghossian 2008, 472)

Suppose that our subject is reflective and logically astute, and so can plausibly be said to believe that (MP) is a good rule. Can this belief help explain how our subject meets Frege’s condition? It is hard to see how it could. After all, (MP) says nothing about Socrates or his mortality; so how can it explain our subject’s coming to believe that Socrates is mortal?

One way for it to do so would be this. Our subject can substitute in (MP) as follows:

(a) If I am rationally permitted to believe that Socrates is human, and that if Socrates is human then he is mortal, then I am prima facie rationally permitted to believe that Socrates is mortal.

Assume, further, that our subject knows what she believes, and moreover that she is justified in taking her own beliefs to be rationally permissible ones (although neither of these assumptions is entirely innocent, of course). Then she can also rely on the following premiss:

(b) I am rationally permitted to believe that Socrates is human, and that if Socrates is human then he is mortal.

And from these two premisses, she can conclude:

(c) I am rationally permitted to believe that Socrates is mortal.

Assuming, now, that coming to believe (c) is sufficient to get our subject to form the first-order belief that Socrates is mortal, her reasoning is done. Familiarly, however, nothing like this will do as a fundamental account of how we reason. This is because the transition from (a) and (b) to (c) in this argument is itself a modus ponens transition. Taking the subject to perform this inference, therefore, presupposes exactly the capacity that we were hoping to explain.[8]

I have assumed so far that rules figure in our subject’s thinking as the contents of beliefs. But this assumption may well be challenged: perhaps being committed to a rule of reasoning is a sui generis type of state, which is on the one hand assessable for rationality and on the other capable of directly motivating belief in the right circumstances. Thus, for example, upon coming to believe that if Socrates is human then he is mortal and that Socrates is human, a subject committed to the rule (MP) does not have to reason to the conclusion that she is rationally permitted to believe that Socrates is mortal — she need be in no doxastic state from whose content such a conclusion would follow. Rather, her commitment is manifested in the fact that she is disposed to believe in this way. Such a view, therefore, would seem to avoid having to explain reasoning in terms of further reasoning.

One problem with such views is that we seem to have no clear conception of what such non-doxastic states of commitment are. Suppose, first, that such states simply consist in dispositions to believe in the relevant ways. This is not satisfying, because a subject’s disposition to infer in accordance with — say — (MP) would be in need of explanation, just as much as her actual modus ponens inferences. Thus the relevant states cannot be mere dispositions to believe, but rather states that ground and explain the subject’s dispositions to believe in accordance with the relevant patterns — say, in accordance with the rule (MP).

Crucially, however, not just any sort of explanation would do: what we want is a kind of rationalizing explanation, an explanation that would show the relevant patterns of belief to be rational from our subject’s point of view.[9] Thus the states in question need to be states of a sort such that, in ascribing them to the subject, we partially specify her epistemic perspective on a certain subject-matter — namely, what would be rational for her to believe. Needless to say, paradigmatic states of this sort are beliefs — which, by hypothesis, the states in question are not.

But the real difficulty for the rule-following model in this area does not have to do with the precise nature of the states in question. The real difficulty is simply that the proposed rules of reasoning are the wrong type of thing to play the role required of them in reasoning. Consider again rule (MP) above. It is a rule about our subject’s own beliefs. But our subject is not supposed to be reasoning about her own beliefs; she is supposed to be reasoning about Socrates and his mortality. It is therefore no surprise that we have trouble finding a place for (MP) in her reasoning.

Although advocates of the rule-following model are quick to point out that the rules of reasoning are not the same thing as the rules of a formal system, the above considerations show that the rules of reasoning they propose tend to remain formal, in a very important sense. They are rules that concern relations among representations — your beliefs or their contents — rather than the subject-matter you are reasoning about. It is this formality that makes it hard to see how the rule-following model can accommodate Frege’s condition. Intuitively, when you reason your attention is on the world, not on your beliefs or their contents (of course, your beliefs constitute your access to the world, and thus in reasoning you inevitably operate with your beliefs; but this is not the same thing as operating on your beliefs or their contents). Thus there is always a gap between what your reasoning is intuitively about and the manipulations of representations that the rule-following model predicts. Of course, a rather trivial bit of reasoning will typically bridge this gap: as we saw, a subject can reason from her grasp of the rule (MP) to the conclusion that she is rationally permitted to believe that Socrates is mortal (at least given certain assumptions about self-knowledge and the alignment of higher-order and first-order beliefs). But the need to appeal to reasoning at this point strongly suggests that following rules cannot be the most fundamental characterization of what we do when we reason.

Seen in this light, attempts to avoid the problem with the help of dispositions to believe or sui generis states of commitment simply come too late. They may avoid (by fiat) the circularity of having to appeal to reasoning in order to explain reasoning, but they do not address the more fundamental issue — namely, the fact that the rules on which the rule-following model is based simply seem to have no rational bearing on what ordinary subjects usually reason about.

But do rules of reasoning on the rule-following model have to be formal in this sense? It is sometimes suggested that the problem with rules such as (MP) is that they are formulated as higher-order statements, i.e., as statements explicitly about beliefs. This is an important theme for John Broome (2013), for instance. Broome’s own formulation of the relevant rule is this:

From p and (if p then q), to derive q. (2013, 234)

I think Broome is correct to want to avoid higher-order rules. Unfortunately, his formulation does not succeed. For, what is it “to derive q”? From context, it looks like “to derive q” just means to come to believe q by reasoning. But then, while Broome’s rule avoids some references to beliefs, it does not avoid all of them: it remains a higher-order rule in disguise.

There is a deeper lesson here: it is very hard to see what a genuinely first-order rule of reasoning would look like. A rule of reasoning is supposed to instruct you on what to believe, given what else you know or believe. How could a rule do this without talking about beliefs? Consider the following statement which, as a statement about propositions, might be construed as an attempt at capturing Broome’s (2013, 231-232) idea that reasoning is “operating on contents”:

The truth of any two propositions of the form p and ‘if p then q’ is conclusive evidence for q.

Clearly, if this is going to count as a genuinely first-order statement, we need to understand the notion of evidence in play otherwise than in terms of “making it appropriate to believe” and the like. But then in what sense could this, or anything like it, be construed as a rule of reasoning? Such a “rule” would be impossible to follow, because it would not tell you what to do. Going first-order is not really an option for the rule-following model.

The same point holds in the case of rules that govern so-called material inferences, or inferences that are intuitively but not formally valid (and, of course, in the case of inductive or abductive rules as well). Suppose, for example, that Alma reasons from “the roses are red” to “the roses are colored”. What might be the rule guiding Alma in her inference? The first-order statement:

For all x, if x is red then x is colored

is simply a universal generalization, not a rule of inference. It does not tell Alma what to do. What we need, rather, is a higher-order statement that would instruct our subject that she is permitted (or perhaps required) to believe that something is colored upon learning that it is red. Even rules of material inference turn out to be formal in the relevant sense.

All this suggests that we need a new approach in the theory of reasoning, one which departs from the rule-following model. The approach I will sketch never requires the reasoner to think about anything other than the subject matter of her reasoning. This means that rules of the sort we have been considering play no role in reasoning either. As we will see, this is what makes it possible for this approach to smoothly accommodate Frege’s condition.

4. A Semantic Approach to Reasoning

I take the premisses, conclusions and intermediate steps of reasoning to be contentful statements, not empty strings of symbols. Intuitively, understanding a statement involves knowing how it represents things as being, or what things have to be like for it to be true. It seems natural to analyse such knowledge in terms of possibilities, or ways for things to be. To understand a statement is to know which of the ways for things to be are such as to make it true. Coming to believe or accepting a statement involves ruling out possibilities in which the statement is not true. The approach I develop below is based on the familiar idea that the epistemic aim of reasoning is to reduce uncertainty about the world, via the elimination of alternative ways the world might be (see, e.g., Robert Stalnaker 1987).

Notice that, since our topic here is human understanding with its familiar limitations, not all of the relevant “possibilities” or ways for things to be are possible worlds: ways for things to be in our sense need not be complete or even logically closed. This allows us to accommodate the fact that you can understand or believe a statement without grasping all of its logical consequences (although there may well be a sense in which you are committed to all the logical consequences of what you believe). You can believe p and “if p then q” without thereby also believing q, since ways for things to be in which p and “if p then q” are true but q is not are not automatically ruled out.[10] So only a subset of these ways for things to be are real possibilities — the classical possible worlds, say.

Knowing what things have to be like in order for a statement to be true does not require being able to give an informative description of the relevant possibilities. It only requires being able to pick out the relevant possibilities upon considering them. Picking out a possibility as one that makes a statement true is not just a brute arational response, but rather an exercise of a cognitive skill, in the same sense that picking out Barack Obama from a crowd of people is an exercise of a cognitive skill (in the latter case, a skill of perceptual recognition). Just as being able to pick out Obama is plausibly constitutive of my knowing who Obama is, being able to pick out possibilities in which a given statement is true upon considering them is constitutive of my knowing what things have to be like in order for it to be true — i.e., of understanding that statement.

Notice that this is consistent with propositional accounts of “knowing wh-” (see Stanley 2011, chap. 2 for an illuminating overview). My knowing who Barack Obama is, for example, might consist in my knowing the relevant range of demonstrative propositions of the form “this is Barack Obama”, in the right perceptual contexts. A similar account is plausible for a subject’s knowing what things have to be like for a statement to be true: it consists in knowing, upon considering a relevant possibility that makes the statement true, that this possibility makes the statement true.

An important consequence of such an account is that my knowledge that this is Barack Obama on a particular occasion is not inferred from my general knowledge of who Barack Obama is: it is, rather, a constitutive part of that knowledge. (Of course, perceptual recognition requires extensive information processing. As explained in Section 2, however, this does not mean that it involves reasoning.) The same holds of a subject’s capacity to pick out the possibilities that make a statement she understands true: it is not inferred from her knowledge of what things have to be like in order for the statement to be true, but is rather constitutive of it.

What is it to consider a possibility? Suppose you are contemplating making a move in chess. You begin by noting how it will change the position of the pieces on the board and how these changes will affect the balance of threats among them. You will then consider different possible responses by your opponent, and then counter-responses on your part — perhaps going a few moves deep. These are all examples of considering possibilities. Good players will be thorough in their search of the space of possibilities and efficient in deciding which possibilities are worth taking seriously. Worse players will be less so.

Different accounts of this type of activity are possible. What matters for present purposes is that the most fundamental way we have of considering possibilities must not itself be a matter of inference. This seems intuitively plausible: an experienced chess player, for example, can simply call to mind the possibilities afforded by a configuration of pieces on the board, without needing to derive them from the rules of chess. Developing such a chess-playing imagination is, quite plausibly, constitutive of becoming a skilled chess player. Such a capacity may take different forms. On one account that is prominent in the psychological literature, you consider possibilities by constructing mental models in working memory (Johnson-Laird 1983; Byrne 2007). Perhaps this involves offline sensory simulation, at least in some cases (Williamson 2008). While such capacities will of course involve extensive information processing, there is again no need to construe this as personal-level reasoning.

So how does all this help with reasoning and Frege’s condition? In the present framework, it seems natural to say that p follows from a set of statements R just in case there are no real possibilities in which all members of R are true and p is not. Believing that p follows from R plausibly consists in ruling out all ways for things to be in which all members of R are true but p is not. Now suppose you already believe R, and hence all ways for things to be that are open to you are such that all members of R are true. In this context, coming to believe that p follows from R just is coming to believe p, since it consists in ruling out all ways for things to be in which all members of R are true and p is not — which, in the present context, means ruling out all possibilities still open for you in which p is not true. Thus this approach smoothly accommodates Frege’s condition. A subject infers p from R in virtue of recognizing that R constrains the ways things might be so as to guarantee that p is true. (Of course, subjects need not explicitly articulate thoughts of this complexity. They might, instead, express themselves entirely in the material mode: “R; so, p”.)
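
One way to picture this proposal is with a minimal sketch in code (an illustration only, under simplifying assumptions, with every name invented for the example rather than drawn from the text). Ways for things to be are modelled as truth-value assignments over a few tracked statements, with the conditional treated as a statement in its own right so that the assignments are not logically closed; believing a statement is modelled as ruling out the assignments on which it is false; and the inference is completed by the further act of ruling out assignments that are not real possibilities.

    # A minimal illustrative sketch, not the paper's own formalism.
    # "Ways for things to be" are not logically closed: the conditional is
    # tracked as a separate statement, so accepting "p" and "if p then q"
    # does not automatically rule out ways in which "q" is false.
    from itertools import product

    STATEMENTS = ["socrates_is_human", "if_human_then_mortal", "socrates_is_mortal"]

    def all_ways():
        """Every assignment of truth values to the tracked statements."""
        for values in product([True, False], repeat=len(STATEMENTS)):
            yield dict(zip(STATEMENTS, values))

    def is_real(way):
        """'Real' possibilities respect what the conditional means."""
        if way["if_human_then_mortal"] and way["socrates_is_human"]:
            return way["socrates_is_mortal"]
        return True

    class Reasoner:
        def __init__(self):
            self.open_ways = list(all_ways())  # nothing ruled out yet

        def accept(self, statement):
            """Believing a statement = ruling out ways on which it is false."""
            self.open_ways = [w for w in self.open_ways if w[statement]]

        def believes(self, statement):
            return all(w[statement] for w in self.open_ways)

        def recognize_what_follows(self):
            """The further cognitive act: rule out ways that are not real."""
            self.open_ways = [w for w in self.open_ways if is_real(w)]

    s = Reasoner()
    s.accept("socrates_is_human")
    s.accept("if_human_then_mortal")
    print(s.believes("socrates_is_mortal"))  # False: believing the premisses is not yet inferring
    s.recognize_what_follows()
    print(s.believes("socrates_is_mortal"))  # True: ruling out the remaining ways just is believing the conclusion

Note that nothing in the sketch consults a rule about the reasoner’s beliefs; the whole of the work is done by eliminating ways for things to be.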

One might wonder here whether I am not merely replacing the various formal rules of inference recognized by the rule-following model with a very general rule, based on the intuitive definition of validity. But this is not so. The point is easiest to see in a case of an intuitively — though not logically — valid inference, say from “the roses are red” to “the roses are colored”. In believing the premiss of this inference, Alma rules out all worlds in which roses are not red. But suppose Alma has not so far so much as considered the question whether roses are colored. Thus, while she is of course (in some sense) committed to their being colored, on our model this does not yet count as a belief of hers. When Alma considers the matter, of course, she immediately sees that there are no real possibilities in which the roses are red but fail to be colored: this is simply an exercise of her non-inferential capacity to recognize possibilities that make statements she understands true or false. Thus any such ways for things to be are now ruled out for her. As discussed in the last paragraph, this is exactly what it takes for Alma to recognize that it follows from the roses’ being red that they are colored. But since Alma has already ruled out all ways for things to be in which roses are not red, in ruling out ways for things to be in which roses are red but not colored she thereby comes to believe that roses are colored. Her reasoning is done, without any application of a rule of inference.

A similar account, with some further assumptions, could work for non-deductive reasoning as well (non-deductive reasoning, being notoriously hard to codify, remains a problem for the rule-following model). Suppose that Raji sees Bob walk out of the examination room looking happy. She infers that Bob did well on his exam. Her inference is not deductive: even given her background folk-psychological knowledge, Raji cannot rule out all possibilities in which Bob’s happy demeanor coexists with his having done poorly on his exam. Thus Raji cannot infer deductively that Bob did well on his exam. But suppose we are willing to grant that Raji knows that possibilities in which Bob’s happy demeanor coexists with his having done poorly on the exam are, in some sense, far-fetched or abnormal, and that, absent any evidence to the contrary, she is justified (perhaps by some sort of default entitlement, in the sense of Wright [2004]) in ignoring them. If all this is granted, then Raji is in a position to infer that Bob did well on his exam, by restricting her attention to non-far-fetched possibilities. Once again, notice that Frege’s condition is smoothly satisfied: Raji’s inferring that Bob did well on his exam just is her recognizing that all the possibilities that make her premiss true, and which additionally satisfy the assumption of normality, make her conclusion true.
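
On the same illustrative model (again only a sketch, with the normality judgment simply stipulated by hand rather than explained), the non-deductive case differs only in that the final elimination is restricted to possibilities that are not far-fetched:

    # Continuing the illustrative sketch: Raji's premiss and conclusion,
    # with "far_fetched" stipulated rather than derived from anything.
    ways = [
        {"bob_looks_happy": True,  "bob_did_well": True,  "far_fetched": False},
        {"bob_looks_happy": True,  "bob_did_well": False, "far_fetched": True},
        {"bob_looks_happy": False, "bob_did_well": False, "far_fetched": False},
    ]

    # Believing the premiss rules out ways in which Bob does not look happy.
    open_ways = [w for w in ways if w["bob_looks_happy"]]

    # Deductively the conclusion does not follow: a far-fetched way remains open.
    print(all(w["bob_did_well"] for w in open_ways))      # False

    # Given an entitlement to ignore far-fetched possibilities, it does.
    normal_ways = [w for w in open_ways if not w["far_fetched"]]
    print(all(w["bob_did_well"] for w in normal_ways))    # True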

So what is the point of formal rules of inference, on this account? Return to the modus ponens inference discussed earlier. Suppose our subject grasps and accepts the statements “if Socrates is human then he is mortal” and “Socrates is human”, and accordingly rules out all ways for things to be in which either of them is false. Given our assumptions, however, this does not mean she automatically rules out all ways for things to be in which Socrates is not mortal: there is a further cognitive act she needs to perform in order to rule out ways for things to be in which both premisses hold but Socrates is not mortal. But, of course, it is an important fact about this particular example (and others like it) that this further cognitive act ultimately depends only on how the original statements were put together, and not on anything specifically to do with Socrates or mortality. This reflects a structural feature of the space of possibilities which a subject can come to recognize. In particular, a subject fluent with the conditional should, in principle, be able to recognize that a statement of the form “if P then Q” commits her to ruling out all ways for things to be in which P is true but Q is not. This is a cognitive achievement for our subject, since on our model such possibilities are not automatically ruled out in virtue of having beliefs of that form. In this way our subject gains insight into the logical structure of the space of possibilities, of a kind which was not available to her before. This, I take it, is the point of the rule of modus ponens, and more broadly of formal rules of inference: they are not rules for reasoning, but for describing the structure of our commitments (a point also argued by Gilbert Harman [1986]).

5. The Place of Reasoning in our Cognitive Lives

In the last section I sketched a semantic approach to reasoning. But does this approach really help us understand what the activity of reasoning fundamentally consists in? I can imagine a potential objector pointing out that my approach liberally appeals to fairly sophisticated cognitive skills, such as skills for considering possibilities and evaluating propositions in them. These cognitive capacities are, as the objector might reasonably claim, no less in need of an explanation than our capacity to reason itself. By way of a conclusion to this paper, I would like to say something about where, in my view, our capacity to reason should be situated in the larger picture of our cognitive lives.

Reasoning is a paradigmatic case of cognitive agency — a central example of the sort of control we have over our own cognitive lives. As such, it is a high-level cognitive skill. For this reason, it is no surprise to find that it works by drawing upon other cognitive capacities, such as imaginative capacities or capacities for sensory simulation. Such capacities are, of course, highly complex in their own right: they draw on our grasp of folk physics, folk psychology, knowledge of chess, and more. They are certainly worthwhile topics for further study. But I think it is actually an advantage of my approach that it helps us see how our capacity to reason is constitutively connected with other cognitive capacities, including such high-level ones.

One important consequence of this fact is that the present approach can draw on a rich array of resources to explain striking patterns in reasoning performance. Consider, for example, the much-discussed fact that people find reasoning tasks easier when they are specified in familiar terms than when they are specified in abstract or nonsensical terms, even if the tasks are formally identical.[11] From the point of view of rule-following theories, this fact must seem mysterious: shouldn’t reasoning simply abstract from content altogether? But if reasoning fundamentally involves the consideration of possibilities, then — in principle, at least — there need be no mystery here.[12] The exact psychological mechanisms will need to be worked out empirically, of course, but in principle it is not surprising that people will find possibilities concerning familiar topics easier to think about than possibilities specified in unfamiliar terms.

In this paper I have tried to show that there are important philosophical reasons to abandon the view that reasoning, conceived as a personal-level activity, is fundamentally a matter of following formal rules. I suggested, instead, that we should think of reasoning in semantic terms. This, as I have tried to show, can result in a better understanding of the activity of reasoning and its place in our cognitive lives.

References 

Andrews, Avery. 1993. “Mental Models and Tableau Logic.” Behavioral and Brain Sciences 16 (2): 334.

Anscombe, G. E. M. 1957. Intention. Cambridge, MA: Harvard University Press.

Boghossian, Paul. 2003. “Blind Reasoning.” Aristotelian Society Supplementary Volume 77: 225–48.

———. 2008. “Epistemic Rules.” Journal of Philosophy 105 (9): 472–500.

———. 2014. “What Is Inference?” Philosophical Studies 169 (1): 1–18.

Brewer, Bill. 1995. “Mental Causation II: Compulsion by Reason.” Aristotelian Society Supplementary Volumes 69: 237–53.

Broome, John. 2006. “Reasoning with Preferences?” In Preferences and Well Being, edited by Serena Olsaretti, 183–238. Cambridge: Cambridge University Press.

———. 2013. Rationality through Reasoning. Chichester: Wiley Blackwell.

Bundy, Alan. 1993. “‘Semantic Procedure’ Is an Oxymoron.” Behavioral and Brain Sciences 16 (2): 339–40.

Byrne, Ruth. 2007. The Rational Imagination: How People Create Alternatives to Reality. Cambridge, MA: MIT Press.

Carroll, Lewis. 1895. “What the Tortoise Said to Achilles.” Mind 4: 278.

Frege, Gottlob. 1979. “Logic.” In Posthumous Writings, edited by Gottfried Gabriel, Hans Hermes, and Peter Long, 1–9. Oxford: Blackwell.

Fumerton, Richard. 1995. Metaepistemology and Skepticism. Lanham, MD: Rowman and Littlefield.

Gibbard, Allan. 1986. “An Expressivistic Theory of Normative Discourse.” Ethics 96 (3): 472–85.

Harman, Gilbert. 1986. Change in View. MIT Press.

Ichikawa, Jonathan Jenkins, and Benjamin W. Jarvis. 2013. The Rules of Thought. Oxford: Oxford University Press.

Jago, Mark. 2014. The Impossible: An Essay on Hyperintensionality. Oxford: Oxford University Press.

Johnson-Laird, Philip. 1983. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge, MA: Harvard University Press.

———. 2001. “Mental Models and Deduction.” Trends in Cognitive Sciences 5 (10): 434–42.

———. 2008. How We Reason. Oxford: Oxford University Press.

Johnson-Laird, Philip, and Ruth Byrne. 1991. Deduction. Hove, UK: Psychology Press.

Johnston, Mark. 1988. “Self-Deception and the Nature of Mind.” In Perspectives on Self-Deception, edited by Brian McLaughlin and Amelie Rorty. Berkeley and Los Angeles: University of California Press.

Leite, Adam. 2008. “Believing One’s Reasons Are Good.” Synthese 161 (3): 419–41.

Oaksford, Mike, and Nick Chater. 2007. Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Oxford: Oxford University Press.

Peacocke, Christopher. 1995. A Study of Concepts. Cambridge, MA: MIT Press.

Pollard, P. 1981. “The Effect of Thematic Content on the ‘Wason Selection Task.’” Current Psychological Research 1 (1): 21–29.

Railton, Peter. 2006. “How to Engage Reason: The Problem of Regress.” In Engaging Reason: Themes from the Moral Philosophy of Joseph Raz, edited by R. Jay Wallace, Philip Pettit, Samuel Scheffler, and Michael Smith. Oxford: Oxford University Press.

Rumfitt, Ian. 2008. “Knowledge by Deduction.” Grazer Philosophische Studien 77 (1): 61–84.

———. 2011. “Inference, Deduction, Logic.” In Knowing How: Essays on Knowledge, Mind, and Action, edited by John Bengson and Marc A. Moffett, 334. New York: Oxford University Press.

Stalnaker, Robert C. 1987. Inquiry. Cambridge, MA: MIT Press.

Stanley, Jason. 2011. Know How. Oxford University Press.

Stenning, Keith, and Jon Oberlander. 1993. “Nonsentential Representations and Nonformality.” Behavioral and Brain Sciences 16 (2): 365–66.

Ter Meulen, Alice. 1993. “Situation Theory and Mental Models.” Behavioral and Brain Sciences 16 (2): 358–59.

Tucker, Chris. 2012. “Movin’ on up: Higher-Level Requirements and Inferential Justification.” Philosophical Studies 157 (3): 323–40.

Valaris, Markos. 2014. “Reasoning and Regress.” Mind 123 (489): 101–27.

Van Cleve, James. 1984. “Reliability, Justification, and the Problem of Induction.” Midwest Studies In Philosophy 9 (1): 555–67.

Wason, P. C. 1968. “Reasoning about a Rule.” Quarterly Journal of Experimental Psychology 20 (3): 273–81.

Wason, P. C., and Diana Shapiro. 1971. “Natural and Contrived Experience in a Reasoning Problem.” Quarterly Journal of Experimental Psychology 23 (1): 63–71.

Wedgwood, Ralph. 2002. “Internalism Explained.” Philosophy and Phenomenological Research 65 (2): 349–69.

———. 2006. “The Normative Force of Reasoning.” Noûs 40 (4): 660–86.

———. 2007. The Nature of Normativity. Oxford University Press.

Williamson, Timothy. 2003. “Understanding and Inference.” Aristotelian Society Supplementary Volume 77 (1): 249–93.

———. 2008. The Philosophy of Philosophy. Malden, MA: Wiley-Blackwell.

Winters, Barbara. 1983. “Inferring.” Philosophical Studies 44: 201–20.

Wright, Crispin. 2004. “On Epistemic Entitlement (I): Warrant for Nothing (and Foundations for Free)?” Aristotelian Society Supplementary Volume 78: 167–212.

———. 2014. “Comment on Paul Boghossian, ‘What Is Inference.’” Philosophical Studies 169 (1): 27–37.


Notes

[1] My attention here is restricted to reasoning with non-graded attitudes, such as knowledge and (full) belief. This choice is controversial, as some theorists argue that virtually all human reasoning involves graded doxastic states, or credences. I cannot enter this debate now, but it is worth noting that one reason for taking this attitude, namely that much of human reasoning is non-deductive in nature, has no force against the position to be argued for here: the approach I will sketch has no trouble accommodating non-deductive reasoning.

[2] What does it mean to say that reasoning, or any other activity, fundamentally consists in Φ-ing? Consider what the activity of playing basketball consists in. One might answer this question on many different levels, including the anatomic/physiological level, the level of individual movements, and the level of strategy and tactics. But there is a sense in which more fundamental than all of those is an abstract specification of what the game is all about: roughly, two teams competing against each other, scoring points by getting the ball through hoops mounted on poles. This level of description is fundamental in the sense that descriptions at all other levels are intelligible by reference to this one: they are further specifications of how one does what is specified at this level of description. This is the sense in which, according to the rule-following model, reasoning is fundamentally a matter of following rules.

[3] This is not to say that it has gone entirely unchallenged. Ian Rumfitt (2008; 2011), for one, proposes an alternative that is in many ways similar to my own.

[4] The “mental models” theory (Johnson-Laird 1983; 2001; 2008; Johnson-Laird and Byrne 1991) in the psychology of reasoning is also often advertised as “semantic”. However, care is needed in interpreting this claim, as pointed out by a number of participants in an Open Peer Commentary in Behavioral and Brain Sciences (Andrews 1993; Bundy 1993; Stenning and Oberlander 1993; ter Meulen 1993). This is because mental models themselves are, no less than sentences in a language of thought, syntactic objects, and the elaborate rules Byrne and Johnson-Laird describe for their manipulation are similarly purely formal. Discussing the mental model theory in detail goes beyond the scope of this paper. For present purposes, I just want to note that my aim is to answer a rather different question from the one that mental model theory aims to answer: my concern is what you do when you reason, rather than how reasoning is carried out at the computational level. Of course, the two questions are not simply independent of each other: an account of what we do when we reason must be sensitive to much of the same empirical data as an account of how reasoning is carried out at the computational level, while the latter sort of account can benefit from a clearer conceptual characterization of the phenomenon it seeks to explain.

[5] Perhaps it is possible to capture this difference without Frege’s condition. Crispin Wright (2014, 33), for instance, acknowledges the need to “save the idea of inference as something that we do”, while rejecting Frege’s condition. He suggests, instead, that we should think of inference as a kind of intentional action. Wright, however, does not explain how he proposes to think of intentional action. This matters, because according to a familiar tradition in action theory, intentional actions are characterized by the agent’s ability to answer the reason-seeking question “why?” (Anscombe 1957). If this is correct, then Wright’s suggestion takes us back to Frege’s condition.

[6] One might argue that Tom’s theoretical beliefs should be construed as extra premisses in his reasoning. In that case the problem with Tom’s reasoning would be that he is not justified in believing his premisses, rather than that he is not justified in taking his premisses to support his conclusion. I have no particular interest in defending this specific example, but the broader point should be resisted: there is an important distinction between premisses and background knowledge in reasoning, which the present objection would threaten to collapse.

[7] Tucker (2012, 338) acknowledges the intuition that in cases like Tom’s the subject’s conclusion is unjustified, but he recommends simply setting it aside. Tucker gives no direct argument for this, other than that it is required by his own positive view. I suggest we do better keeping the intuition and rejecting the aspects of Tucker’s view that conflict with it.

[8] The history of this argument traces back at least to Lewis Carroll’s (1895) story of Achilles and the Tortoise. Variations are given by Winters (1983), Van Cleve (1984), Johnston (1988), Brewer (1995), Fumerton (1995), Boghossian (2003; 2008; 2014), Railton (2006), Broome (2013), Wedgwood (2006), and others. For some replies, see Leite (2008) and Valaris (2014).

[9] Boghossian (2008, 498–9; 2014, 14–5) makes a similar point. Some authors (Peacocke 1995; Boghossian 2003; Wedgwood 2007) suggest that possession of certain concepts partially consists in dispositions to reason in certain ways. However, even if such dispositionalism is correct as an account of concept-possession (for arguments against see Williamson [2003; 2008]), it still does not give us the sort of explanation that we need here. Unless it is by way of explaining why belief patterns that conform to (MP) appear rational from our subject’s point of view, it is hard to see how a subject’s grasp of the conditional could explain her disposition to infer in accordance with (MP) in a satisfying way.

[10] Using incomplete or even impossible worlds to model deductive ignorance is a relatively familiar, though controversial, approach. See, for example, Ian Rumfitt (2008) and Mark Jago (2014). Any account of reasoning will need some way to represent deductive ignorance, and this approach seems like a natural alternative to syntactical ones.

[11] Much of the evidence for such “content effects” comes from research with the Wason selection task paradigm. For example, subjects perform much better with versions of the Wason selection task (Wason 1968) which are about familiar topics than with versions that involve either meaningless symbols or unfamiliar content (Wason and Shapiro 1971; Pollard 1981). Such content effects show that our reasoning capacities are, at the very least, not purely formal — they are not insulated from background knowledge and processes of semantic evaluation. Notice that the Bayesian program in the psychology of reasoning (Oaksford and Chater 2007), while rather different from the proposal sketched above in that it is set in a probabilistic framework, is also explicitly inimical to the formality of the rule-following approach: background knowledge and semantic understanding are obviously essential to it.

[12] Boghossian (2014, 12) illustrates the generality of reasoning through an example of modus ponens with propositions drawn from general relativity. Boghossian’s point is that a typical subject will be able to recognize the validity of the argument even if she has no clue what the premisses or the conclusion mean. What Boghossian overlooks, however, is that subjects are actually likely to find this example much harder than, e.g., his own earlier one involving rain and wet streets, even though the two are formally identical.

Commentaries

    1. Markos Valaris’s “What Reasoning Might Be” takes up the question – recently given prominence in work by Paul Boghossian, John Broome, and others – of how to characterize the special kind of connection that is constituted when a subject comes to believe a proposition by inferring it from some set of already-accepted propositions.  It’s a pleasure to comment on Markos’s contribution to this debate, not only because his paper makes so many forceful points in such a short space, but because the author is an old friend.

    I find myself in extensive agreement with the critical part of Markos’s discussion.  I share his skepticism about attempts to clarify the nature of inference by appeal to the idea of rule-following, and I agree that an account of inference should not require the subject to think about relations between her own attitudes.  I found Markos’s comments on these topics very illuminating, but I will say little about them.  My focus will be on his positive proposals.  Here I am less persuaded, and naturally this leaves me with more to say.

    2. Let me begin by summarizing the problem of inference as I understand it.  Suppose a subject S draws a simple deductive inference such as:

    (1) It is raining.

    (2) When it rains, the streets get wet.

    So (3) The streets are getting wet.

    What sort of relation must there be between S’s beliefs in (1), (2), and (3) for her to count as having inferred (3) from (1) and (2)?

    It obviously does not suffice for S first to believe (1) and (2) and then to come to believe (3): she must come to believe (3) because she believes (1) and (2).  But not just any “because” will do: a person’s believing (1) and (2) might cause her to believe (3) in any number of “deviant” ways, and might thereby bring her to hold a belief that is in fact rationalized by (1) and (2), but all this might occur without her seeing in (1) and (2) a reason to accept (3).

    Recognizing this, and seeking to formulate a condition corresponding to what is missing in deviant cases, we might propose the following “Taking Condition” on inference:

    (TC) A subject S who infers some conclusion C from some set of premises ∏ must come to believe C precisely because she takes ∏ to support C.1

    (TC) is intuitively attractive: it seems to capture the psychological significance of the “So” that connects the subject’s conclusion to her premises.  A subject who draws a personal level inference does not just move automatically from certain extant beliefs to a further belief; she makes this transition in virtue of some (purported) insight into the connection between her premises and her conclusion.  (TC) simply spells out the content of this insight.

    It is not immediately clear, however, what “taking” one’s premises to support one’s conclusion can amount to.  Indeed, when we try to clarify this idea, we appear to confront a dilemma.  On the one hand, if (TC) is interpreted as requiring that S believe

    (4) (1) and (2) support (3).

    then we must be prepared to explain what role this further belief plays in bringing about her transition from believing (1) and (2) to believing (3).  The requirement cannot be that (4) should figure as a further premise of S’s reasoning, for then (TC) will also require her to take it that:

    (5) (1) and (2) and (4) support (3).

    And now (5) too will need to be added to her premises, and we will be on our way to the sort of hopeless regress made famous by Lewis Carroll.  But if (4) does not figure as a further premise in S’s reasoning, how does her believing (4) contribute to her coming to believe (3), and to her being reasonable in doing so?  In denying that the subject’s belief in (4) supplies a further premise for her inference, we set aside the only familiar and well-understood model of how beliefs function in generating rational thought and action, and thus relinquish whatever gain in understanding the interpretation of takings as beliefs was supposed to provide.

    Moreover, we should remember that we introduced (TC) to capture what is missing from deviant forms of connection between S’s believing (1) and (2) and her believing (3).  If the “taking” mentioned in (TC) is just another belief, it is hard to see how it could accomplish this.  If S’s believing (1) and (2) can deviantly cause her to believe (3), so too, presumably, can S’s believing (1), (2), and (4).  So even if S’s belief in (4) does not function as a premise in her reasoning, it is not clear how adding it to the background of her inference helps to solve our problem.

    But on the other hand, if we reject the identification of taking with believing, how are we to understand it?  The intuition underlying (TC) was that, in a genuine inference, the subject’s coming to believe her conclusion must occur, not just automatically, but in virtue of her having some (purported) insight into how it is supported by her premises.   But if we eliminate the doxastic element in our interpretation of (TC), it is not clear what is left for us to make of this idea.  S may presumably be disposed to believe (3) in the presence of beliefs (1) and (2) without having any insight into the connection.  (TC) says that the mere operation of such a disposition does not suffice for inference; S’s coming to believe (3) must reflect her having some understanding of the relation between (1), (2), and (3).  But what can this mean if not that she holds some belief about this relation?

    I offer this opinionated overview of the problem of inference in order to bring out what, it seems to me, we should want in an account of (TC).  We should want, not just a description of some further cognitive state that, when combined with beliefs (1) and (2), yields belief (3).  We should want an account of how the relevant state can have this curiously liminal character: something more than a blind disposition to proceed from certain premises to certain conclusions, but something less – or anyway other – than a further belief on which inference is based.

    3. Now, like me, Markos accepts (TC) as a requirement on inference, and in the positive part of his paper, he seeks to give an account of what it is to take one’s premises to support one’s conclusion.  But unlike me, he does not see a problem with interpreting taking as belief (p. 6).  Indeed, in other work (Valaris 2014), he has argued that a subject who draws a deductive inference must believe her conclusion follows from her premises, and that this belief plays not a causal but a “constitutive” role in her inference.  His case for this position is complex, and there is not space to examine it here.  I will just raise some doubts about his proposal in the present paper about what the relevant belief might be, and then draw one moral about what we should want in an account of (TC).

    Markos’s proposal is as follows.  First, we may in general think of believing a proposition as a matter of ruling out possibilities recognized as incompatible with it (where the “possibilities” to be considered can include incompletely characterized states of affairs, to model ways in which a subject may lack insight into the consequences of propositions she accepts).  It is crucial to this picture that understanding a proposition rests on a basic “cognitive skill” in recognizing whether given possibilities are ruled out – a cognitive skill whose exercise does not, at least in the fundamental case, require drawing inferences (cf. pp. 13-14).

    Given this background, Markos suggests, we can simply identify a subject’s taking a conclusion C to follow from a set of premises ∏ with her believing that C follows from ∏, where believing this consists in “ruling out all ways for things to be in which all members of [∏] are true but [C] is not” (p. 15).2  Now, Markos argues, a subject who believes this, and who believes all members of ∏, will necessarily rule out all possibilities in which C is false, so she will count as believing C.  Moreover, the subject will believe C because she takes it to follow from ∏, in the following sense: given her belief in the members of ∏, her coming to believe that C follows from ∏ will constitute her coming to believe C.  In this way, Markos’s proposal would neatly clarify the role that taking one’s conclusion to follow from one’s premises plays in inference, and would vindicate (TC).

    4. Elegant as these proposals are, I have a few lingering misgivings about them.

    First, as Markos acknowledges, his account appeals liberally to a sophisticated cognitive skill: the ability to recognize what possibilities are ruled out by a given proposition.  The worry about this is not simply that this skill is “no less in need of explanation” than the capacity to draw inferences (p. 17), but that it is natural to think our ability to recognize what a proposition rules out depends on the very same kind of insight into rational relations that underlies inference.  Suppose I am trying to identify an approaching figure who looks for a moment like my friend Kendra, but then I remember that I saw Kendra over my shoulder just a moment ago.  Given how the world works, the proposition that I saw her behind me a moment ago rules out that she is the figure in the distance, but recognizing this plausibly requires drawing an inference: she was just here, so she couldn’t get over there without moving at an unfathomable speed, and people don’t move at such speeds, so it can’t be her.  Of course nothing like this need go through my mind at the moment when I rule out the possibility that the approaching figure is Kendra, but it forms the rational background to my ruling this out, and this background is not opaque to me: I could explain it if queried, and I would not merely offer this explanation as a post hoc rationalization, but as an account of what was all along my reason (i.e., the reason I took to be sufficient) for ruling out this possibility.

    If Markos’s appeal to the cognitive skill of ruling out possibilities incompatible with a given proposition is not to presuppose what is to be explained, the fundamental case of ruling out possibilities must not work like this.  In derivative cases, recognizing a possibility as excluded may involve drawing inferences, but in the theoretically basic case the subject must simply find it primitively obvious that certain possibilities are ruled out.  She might perhaps be able to identify reasons for these exclusions post hoc, but her understanding of such reasons must not be integral to her recognizing these possibilities as excluded.  Markos thinks this is intuitively plausible, and he cites as an example the capacity of an experienced chess player to call to mind possibilities afforded by a certain configuration of pieces “without needing to derive them from the rules of chess” (p. 14).  To me, it seems less clear how to interpret this case.  Of course I grant that an experienced chess player may identify configurations as possible without conscious meditation on the rules.  But however immediate her identification may be, I suppose it rests on her understanding of how the pieces may permissibly move, in such a way that she could explain why the game could unfold this way and not that.  And I do not mean merely that she could construct such explanations post hoc, but that her understanding systematically informs her sense of what is possible, in such a way that the open possibilities do not simply form an unrelated miscellany for her, but are grasped as systematically interconnected in virtue of the rules.  There may be no dateable moment at which she draws an inference from the rules to the possibility of a given configuration, but nevertheless, in identifying a configuration as possible, she takes it to be generable in accordance with the rules.

    And mustn’t something similar hold for recognizing what possibilities are consistent with a given proposition?  It is attractive to think of our capacity to understand propositions as resting on our grasp of a repertoire of concepts, which joins with our comprehension of various forms of logical structure to allow us to comprehend what any given proposition makes possible and what it excludes, not as a sheer miscellany, but as a systematically interconnected field of possibilities.  Our recognition of a given possibility as belonging to this field may be immediate, but this does not show that it is primitive: it might, and plausibly does, depend on our understanding how it fits into the field of possibilities permitted by this proposition, given its structure and content.  This understanding might not be actualized in any dateable event of inferring that such-and-such a possibility is consistent with this proposition, but it rests on the subject’s taking this possibility to be intelligibly related to the content of the proposition.  If this is right, then we cannot appeal to the ability to recognize what a proposition allows and rules out without presupposing the very kind of insight into rational relations between propositions for which we are aiming to account.

    My second area of misgivings concerns Markos’s proposal to identify the belief that C follows from ∏ with the disposition to rule out all ways for things to be in which all members of ∏ are true but C is not.  Does this capture the intuitive idea of taking one’s conclusion to follow from one’s premises?  I see two reasons for doubt.

    First, it is not clear that the proposal captures the intuitive notion of taking something to follow.  Suppose ∏ consists of P and Q.  Then a person who rules out all ways for things to be in which all members of ∏ are true but C is not rules out all possibilities in which:

    P & Q & ~C

    But it seems clear that a person might rule out these possibilities without taking C to follow from P & Q: she might simply take a certain combination of three conditions to be excluded without taking two of these conditions to imply the exclusion of the third.  (This objection parallels a familiar objection to the material conditional as a representation of the intuitive idea of consequence.)  If this is right, then although Markos’s proposal may identify a cognitive state whose presence would ensure that a subject who believes all members of ∏ will also believe C, it does not capture the moment of (purported) rational insight we sought to mark with (TC).

    Might we, then, revise Markos’s proposal to capture what is missing?  One might try to capture the relation the subject sees between the propositions in ∏ and C by stipulating some way in which the subject’s coming to rule out the possibilities excluded by the propositions in ∏ must be causally or counterfactually related to her coming to rule out the possibilities excluded by C.  But this was just our problem: to characterize the special way in which a subject’s belief in certain premises (and thus her ruling out possibilities excluded by these premises, if that is how belief is interpreted) must be connected with her belief in a conclusion for her to count as having inferred the latter from the former.  If fixing Markos’s proposal requires adding something like this, we are back to square one.

    I think there is also a second, subtler reason to doubt the adequacy of Markos’s proposal: it is not clear that it captures how the subject must think of her premises.  After all, a subject who considers what follows from a set of premises does not merely consider what would be the case if each of her premises (considered severally) were true, but what is implied by all of the premises (considered jointly).  In other words, she considers the rational import of her premises taken together in a certain way.  I will represent the object of such consideration by writing

    [P & Q]

    but I do not mean that the subject considers an ordinary conjunctive proposition.  In order to infer the conjunctive proposition P & Q from the two premises P, Q, one must already be capable of considering these two propositions together in the relevant sense.

    Markos’s proposal (if I understand it correctly) does not mark a distinction between believing propositions severally and taking them to hold jointly.  If it did, it would be problematic for him to infer that a subject who believes some set of premises ∏, and who believes that C follows from ∏, thereby believes C.  For suppose again that ∏ consists of P and Q, and suppose S holds the following beliefs:

    (1) P

    (2) Q

    (3) C follows from [P & Q]

    According to Markos’s proposal, this will entail that the subject rules out

    (1’) All possibilities excluded by P

    (2’) All possibilities excluded by Q

    (3’) All possibilities in which [P & Q] but not C

    Does this entail that S will rule out all ways for things to be in which C is not true, and thus will believe C?  The entailment does not seem obvious.  Since, by hypothesis, S might believe both P and Q without recognizing [P & Q], the possibilities she rules out in virtue of separately believing P and Q need not coincide with the possibilities she rules out upon recognizing [P & Q].  Hence even if a subject rules out all possibilities incompatible with each proposition in ∏, and rules out all possibilities in which [P & Q] but not C, it is not obvious that she will thereby rule out all possibilities in which not C.

    Marking the distinction between separately believing P and Q and recognizing that [P & Q] allows us to see that there is a kind of intellectual operation involved simply in considering the premises of an inference that raises problems parallel to the ones raised by the operation of “taking something to follow” from given premises.  Already in drawing together one’s premises “in one consciousness” (as we might put it), one is taking them to hold jointly.  But this taking, like taking something to follow, cannot be just another belief, on pain of its merely adding one more item to the stock of premises whose joint rational significance the subject must consider.

    5. The moral of these objections, I would suggest, is that we cannot easily domesticate the notion of “taking” invoked in (TC) by identifying it with ordinary belief.3  When we try to remove this unstable element from our compound, we either fail to produce the kind of compound we aimed to synthesize, or produce something that contains the problematic element after all.

    Some philosophers might regard this outcome as grounds for rejecting (TC) altogether, but that is not my attitude.  This is not the place to propose an alternative account of taking – and even if this were the place, I do not have one up my sleeve.  But let me conclude by describing my own reasons for thinking that the problem about the nature of “taking” is real and deserves our attention.

    It seems to me that the problem we encounter in trying to understand taking one’s conclusion to follow from one’s premises is really an instance of a ubiquitous problem in the characterization of the lives of rational animals.  Other authors have noted that the problem of characterizing the nature of inference is a close relative of the problems at issue in epistemological debates about “the basing relation” and in action-theoretic debates about “explanatory reasons” (cf. Boghossian 2014, p. 3).  I would go further and suggest that the core of the problem arises wherever we seek to characterize a specifically rational mode of engagement with the world – including domains that have no direct connection with the topic of reasoning per se.

    Consider, for instance, Philippa Foot’s claim that we rational agents differ from nonrational agents inasmuch as nonrational agents pursue ends but do not, like us, apprehend them “as ends” (Foot 2001, p. 54) and Brian O’Shaughnessy’s claim that while nonrational animals know truths, they do not, like us, know them “under the aspect of truth” (O’Shaughnessy 2003, p. 111).  These are two recent attempts, in different domains, to mark a distinction between rational and nonrational modes of activity.4  In neither case is the distinction directly connected with the topic of reasoning.  Nevertheless, there is a characteristic shape to each claim that resembles the shape of (TC).  We can bring this out by formulating each claim using the language of “taking”: reason-governed inference requires, not just moving from certain extant beliefs to (what is in fact) a further belief supported by them, but taking one’s premises to support one’s conclusion; rational action requires, not just pursuing (what are in fact) desired ends, but taking the objects of one’s pursuit to be desirable; and rational cognition requires, not just registering (what are in fact) truths, but taking the things one believes to be true.  In each case, the underlying intuition is that a rational animal lives its life, not just blindly or automatically, but with a certain insight.  But whatever this insight amounts to, it is implausible to think that it consists of the subject’s holding a further belief (that this end is desirable, that this proposition is true, that this conclusion follows from these premises).

    The implausibility of identifying rational insight with a further belief can be brought out in various ways: for instance, by emphasizing that believing these propositions requires possessing concepts not plausibly possessed by unsophisticated but nevertheless rational subjects, or by showing that this way of characterizing rational insight would lead to a regress (e.g., a situation in which a subject must hold, concerning each proposition she believes, an infinite hierarchy of truth-ascribing beliefs).  But these are just subtle ways of bringing out what is really a simple flaw.  The point of saying that rational animals regard their ends “as ends”, regard their beliefs “as true”, and regard their conclusions “as following” from their premises, is not to identify some further content that rational animals endorse, but to gesture toward something distinctive about the manner in which they endorse any content whatsoever: they do so, as it were, comprehendingly.  Using an adverb to predicate the relevant comprehension marks – although it certainly does not clarify – the fundamental mode of rational insight: it does not consist in a further thought, but in something about one’s manner of thinking itself.  As I see it, the task we face – a task the critical parts of Markos’s paper make all the more evident and acute – is to understand what form a clarification of this implicit insight could take.

    References

    Boghossian, Paul.  2014.  “What Is Inference?”  Philosophical Studies 169: 1-18.

    Foot, Philippa.  2001.  Natural Goodness.  Oxford: Oxford University Press.

    Lavin, Douglas.  2011.  “Problems of Intellectualism: Raz on Reason and Its Objects.”  Jurisprudence 2 (2): 367-78.

    O’Shaughnessy, Brian.  2003.  Consciousness and the World.  Oxford: Oxford University Press.

    Valaris, Markos.  2014.  “Reasoning and Regress.”  Mind 123 (489): 101-127.

     
     


    Notes

    1. I owe the term “Taking Condition”, and several points in the exposition that follows, to Boghossian 2014. Some philosophers would, of course, reject (TC), but I will take it for granted, since it is a point on which Markos and I agree.

    2. In his initial presentation, Markos focuses on the case in which the subject takes her conclusion, not just to be supported by her premises, but to follow deductively from them. I will do likewise in what follows.

    3. This is not to deny that a subject who takes C to follow from ∏ may, on reflection, form the belief that C follows from ∏. A subject who possesses sophisticated concepts like __ follows from __ can certainly, if she considers the matter, frame a belief articulating what she takes to follow. But if, as seems plausible, a subject can draw inferences without possessing such sophisticated concepts, and if drawing an inference nevertheless involves taking one’s conclusion to follow (in the sense expressed by the characteristic “So” we use to mark an inference), then a subject can take C to follow from ∏ without believing that C follows from ∏. Moreover, even if she does hold this belief, it is not what brings about her conclusion. (Drawing this distinction would, I think, allow us to acknowledge what is right in the position advocated in Valaris 2014 while rejecting the identification of taking with belief.)

    4. These examples are cited in a related context by Doug Lavin (2011, p. 377), to whom I am indebted for many conversations about these topics.

  2. Cutting at the Joints of Reasoning

    1. Introduction

    In “What Reasoning Might Be”, Valaris argues that reasoning does not fundamentally consist of rule-following, contra Boghossian (2014), Broome (2013) and others in the recent literature on the nature of inference. Valaris argues that the rule-following view cannot appropriately capture how reasoning reflects an individual’s take on her evidence (Frege’s Condition). He offers instead a “semantic model” of reasoning, according to which reasoning fundamentally consists in the consideration of possibilities. On this view, what we are doing when we move from premises to a conclusion is narrowing the set of possible worlds to those in which the conclusion statement is true.

    In what follows, I’ll offer a brief response to Valaris’s criticism of the rule-following model, but focus mainly on his own alternative positive proposal.  I’ll raise some questions about whether Valaris’s semantic model of reasoning cuts reasoning at its proper joints, that is, whether it can adequately differentiate between 1) representing a set of premises that logically entail a conclusion and actually drawing that conclusion, and 2) reasoning and other types of mental transitions.

    2. The Rule-Following Model

    Valaris’s core objection to the rule-following model of inference is that the rules that are followed cannot play the role of instructing the subject on what beliefs to form, while also meeting Frege’s Condition. Valaris summarizes Frege’s condition as “that inferring p from a set of premisses R requires taking R to provide justification or support for p, and coming to believe p (partly) because of this” (4). Valaris accepts that any adequate account of reasoning must accommodate this condition, as do most other players in the recent literature on reasoning (Boghossian 2014, Broome 2013, Chudnoff 2014).

    Valaris argues that meeting Frege’s condition is problematic for any rule-following model: the rules that mediate reasoning cannot be first-order rules about the relations between propositions (of the form ‘If P and “If P then Q”, then Q’), because such rules could not properly instruct us to form the belief Q.  He claims that in order for a rule to play the requisite psychological role of guiding our process of belief formation, it must be a higher-order rule about our own beliefs (such as ‘If I believe P and I believe “If P then Q”, then I ought to also believe Q’) (11).  Yet Valaris claims that higher-order rules cannot properly reflect our appreciation of evidential support relations between things in the world, as Frege’s Condition (as well as our intuitive conception of the subject matter of reasoning) requires.  He takes this criticism to apply to all versions of the rule-following model.

    Granting that we want the ultimate subject matter of reasoning to be things in the world, not our own beliefs, it seems that the higher-order rule-following account leaves open a few ways in which this can be achieved.  One way is that even if the epistemic support relations that we appreciate in reasoning are between our belief contents, the conclusion beliefs that we end up with will certainly be about the world, no less than the initial premise beliefs we started out with are.  The idea that the subject matter of the premises and conclusions involved in reasoning is the world is not contested by a picture that involves higher-order rules.  For Valaris’s objection to go through, he would have to take on board the stronger claim that the subject matter of the inferential rules that mediate reasoning is also the world, and this is at least less intuitively obvious.  Perhaps it suffices for our conclusions to be about the external world, and for our inference rules to be about the relations between our own beliefs.

    Even holding onto the stronger requirement that our inference rules must be about relations in the external world, though, it seems possible that we could appreciate relations between states of affairs in the world indirectly, by appreciating the rational relations between our own beliefs.  An individual who follows the higher-order rule ‘If I believe P and I believe “If P then Q”, then I ought to also believe Q’ can be seen as taking P and ‘If P then Q’ to provide support for Q.  Her appreciation of the epistemic support between the premises and the conclusion is manifested in the patterns of belief formation that she takes to be mandated by the premises, and so she satisfies Frege’s condition.  Valaris must hold that such options for how reasoning by following higher-order rules can have the world as its subject matter are unavailable, but his arguments do not seem to have decisively ruled them out.

    3. Valaris’s Semantic Model

    Following his critique of the rule-following model of reasoning, Valaris suggests that instead we adopt a “semantic model”, according to which reasoning fundamentally consists in consideration of possibilities (18).  Taking a Stalnakerian approach (Stalnaker 1987), Valaris claims that to understand a given statement is to be able to pick out the set of possibilities in which that statement is true (13).  To acquire a new belief in a statement is to rule out the possibilities in which the statement is false (12).  Reasoning, on this view, is the process of narrowing the set of possible ways the world could be.

    The main worry I have with Valaris’s proposal is that it does not adequately cut at the joints of reasoning, in terms of differentiating between the different types of mental states and processes involved.

    The first way it does not cut at the joints is in differentiating between endorsing a set of premise beliefs that have a particular logical entailment, and actually drawing the conclusion that is entailed. Quite often, we believe P and ‘If P then Q’, but fail to believe Q, perhaps because we never attend to the logical relation between these statements, or because we are emotionally invested in believing ‘~Q’, or for any number of ordinary factors that undermine our rational consistency. This can happen even when we fully understand how modus ponens-style reasoning works in general, and even when we fully understand each relevant statement, because our beliefs may be encapsulated from each other. For example, consider Lewis’s (1982) case of conflicting directional beliefs, in which he believes the inconsistent triad of 1) Nassau Street runs east-west, 2) the railroad tracks run north-south, and 3) Nassau Street and the railroad tracks are parallel. The possibility of this sort of case clearly illustrates that our beliefs may be fragmented into different subsystems that are internally consistent but mutually inconsistent. We can also have consistent beliefs that are fragmented in a similar way, in that we simply fail to draw the conclusions that would follow from them when held jointly.
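
    To make the fragmentation point vivid, here is a small sketch of my own (the labels and the toy set of worlds are invented for illustration; nothing like it appears in Lewis’s paper or in Valaris’s).  Each fragment is modelled as its own set of open ways for things to be, so the joint inconsistency never surfaces unless the fragments are pooled:

    from itertools import product

    ATOMS = ["nassau_runs_EW", "tracks_run_NS", "nassau_parallel_to_tracks"]
    ways = [dict(zip(ATOMS, values)) for values in product([True, False], repeat=3)]

    # Geometric background: a street running east-west is not parallel to tracks running north-south.
    geometry = [w for w in ways
                if not (w["nassau_runs_EW"] and w["tracks_run_NS"] and w["nassau_parallel_to_tracks"])]

    def believe(open_ways, atom):
        """Believing a proposition narrows the open ways to those in which it is true."""
        return [w for w in open_ways if w[atom]]

    # Lewis's directional beliefs, held in separate fragments:
    fragment_a = believe(believe(geometry, "nassau_runs_EW"), "tracks_run_NS")
    fragment_b = believe(geometry, "nassau_parallel_to_tracks")

    print(len(fragment_a) > 0)   # True: this fragment is internally consistent
    print(len(fragment_b) > 0)   # True: so is this one
    pooled = [w for w in fragment_a if w in fragment_b]
    print(pooled)                # []: jointly inconsistent, but the fragmented subject never pools them

    The sketch only pictures the phenomenon; the question for the semantic model is whether it has the resources to say what the difference consists in between a subject whose beliefs remain fragmented in this way and one who draws the conclusions they jointly support.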

    We need an account of reasoning that explains this feature of our cognitive systems. The semantic model, as Valaris sketches it, does not have the resources on its own to do this. He writes, “coming to believe that p follows from R just is coming to believe p, since it consists in ruling out all ways for things to be in which all members of R are true and p is not” (15). When we consider one individual who believes P and ‘If P then Q’ but has not concluded Q, and a second individual who believes these two premises and has concluded Q, so long as both individuals understand each of these statements fully, the set of possibilities that they each take to be open will be the same.

    Valaris does acknowledge the need to explain the difference between these sorts of individuals (16). He appeals to the idea of noticing logical structures of the space of possibilities to explain the cognitive accomplishment that the second individual has achieved that the first has not. It is not clear how this “noticing” is to be mapped in terms of differences in their beliefs on the semantic model, though, and even if it were to be fleshed out, it seems peripheral to the main story that the semantic model gives of how inference fundamentally consists in the narrowing of possibilities. No additional possibilities are ruled out by the individual who also believes Q. Some role for application of logical rules must come in, in order to explain the particular mental movement from merely holding the set of premise beliefs to also holding a logically entailed conclusion belief. Valaris wants to siphon off such changes in belief to differences in the “structure of our commitments” (17), but the conclusion-drawing step of inference seems to be the main part of what we are concerned with when we investigate the nature of reasoning, and so an explanation of it should be at the core of any satisfactory account. If this is the stage at which logical rules must be appealed to, it seems that logical rules are what is really at the heart of inference after all.

    The second way that Valaris’s semantic model does not adequately cut at the joints of reasoning is in differentiating between inferential reasoning and other types of mental transitions. Reasoning is one way our beliefs can change, but they can also change by association, or by external stimulation of a particular set of neurons, or by an accidental bonk on the head. The semantic model gives a clear way of describing what belief change is, but it seems to be silent as to the mechanism by which such change occurs. Consider two people who both believe ‘Rover is the name of Sally’s pet’ and ‘Sally’s pet is a dog’. The first person integrates these two beliefs and forms the belief ‘Rover is a dog’ on their basis. The second person fails to integrate these beliefs, but she happens to have an association between the name ‘Rover’ and the concept DOG, and so she also forms the belief ‘Rover is a dog’. We want to be able to say that there is both a psychological and an epistemic difference between these two individuals. Their beliefs are formed through different types of psychological processes, which in turn determine their differing degrees of justification (presumably the first is more justified than the second). Yet the semantic model would count both of these processes of belief formation as a narrowing of possibilities, eliminating the ways the world could be in which Rover is not a dog. The semantic model simply maps the changes in one’s overall mental state, and does not explain the mechanisms by which individual beliefs interact, leaving important differences unexplained. The rule following account does better on this front, because it can say that in cases of reasoning, the mechanism by which we change our beliefs is following a rule, setting it apart from belief change by associative triggering or brute physical causes (for more on this point see Quilty-Dunn and Mandelbaum).

    Valaris’s semantic model is grounded in a particular account of what it is to hold a belief, and thereby provides us with a useful way of thinking about what happens when our beliefs change. But a theory of reasoning is more than a theory of belief change. A complete theory of the nature of reasoning owes us an account of differences between structures of belief formation in order to be useful for psychology and epistemology.

     
    References

    Boghossian, P. (2014) “What is inference?” Philosophical Studies 169 (1): 1–18.

    Boghossian, P. (manuscript) “Reflecting on reflections: Comments on Kornblith”.

    Broome, J. (2013) Rationality Through Reasoning. Chichester: Wiley Blackwell.

    Chudnoff, E. (2014) “The rational roles of intuitions” in A. Booth and D. Rowbottom (eds.), Intuitions. Oxford: OUP, 9–35.

    Lewis, D. (1982) “Logic for equivocators” Noûs 16: 431–441.

    Quilty-Dunn, J. & Mandelbaum, E. (manuscript). “Inferential transitions”.

    Stalnaker, R. (1987) Inquiry. Cambridge, MA: Bradford Books.

    Valaris, M. (manuscript) “What reasoning might be”.

  3. I enjoyed reading and thinking about Valaris’ paper.  I’m sympathetic with many of its central points.  Valaris holds (i) that inference is a person-level act1 (ii) that requires the agent to take the premises to support the conclusion.  Valaris follows Boghossian in referring to (ii) as Frege’s Condition.2  I’ve explicitly endorsed Frege’s Condition in past work (Tucker 2010: 518-9, 2012: 333-4).  I’m also happy to endorse (i) (cf. Tucker 2010: 501-2), provided that we don’t get too fussy about what it takes for a person-level transition to count as an act.  I’ll assume both (i) and (ii) for the purpose of these comments.  The main point of Valaris’ paper is to motivate a semantic account of inference that can vindicate Frege’s Condition.

    In section 1, I raise some worries about the details of the semantic account.  These worries are intended more as a request for information than as a devastating objection.  In section 2, I criticize Valaris’ argument that the relevant kind of taking must involve a state that is assessable for epistemic justification.  Along the way, I’ll raise a worry about the account of evidential support that seems built into the semantic account of inference.

    1. Valaris’ Semantic Account of Inference

    Consider the following account of inference:

    Simple Account: S infers proposition P2 from proposition P1 iff S believes P2 because both (i) S believes P1 and (ii) S recognizes that P1 supports P2.

    Note that this account makes no appeal to rule-following, and that condition (ii) captures Frege’s condition.  These two features will please Valaris, but he may not be satisfied.  Valaris contends that, “when you reason your attention is on the world, not on your beliefs or their contents” (10).  Insofar as the Simple Account requires the subject to recognize whether one proposition supports another proposition, Valaris may worry that the Simple Account makes the reasoner unduly focused on contents rather than the world.  I’m not sure that such a worry would be well-founded.

    Suppose that I make the inference the rose is red, so it is colored.  My kid wonders why I make that inference.  I might tell him:

    “if something is red, then it must be colored.”

    On the one hand, in making such a claim, my attention seems directed at things in the world.  I am thinking about the relationship between redness and coloredness.  Yet my attention is directed at the world by thinking with propositions.  I use the propositions something is red and something is colored, and I attribute a certain relationship to those propositions, namely that the former’s truth guarantees the latter’s truth.  I’m tentatively inclined to think, then, that the Simple Account should be sufficiently world directed to satisfy Valaris.

    I worry that the simple account will suffer from problems pertaining to causal deviance,3 so I’m not defending it.  I mention it because it’s unclear to me how Valaris’ account relates to this simple one.  Here is a hypothesis: Valaris defends one version of the simple account.  His version is distinct from others primarily in two respects.  First, he offers the following account of evidential support: P supports Q iff P’s truth rules out all possibilities in which Q is false, or does so given the assumption of normality (15-16).  Second, he offers a specific account of how we recognize evidential support relations, one which involves considering and ruling out possibilities (13-15).

    If this hypothesis is correct, then the real work is done by Valaris’ endorsement of the Simple Account.  The further details only make Valaris’ account unnecessarily tendentious.  Later I’ll explain why I’m concerned about Valaris’ account of evidential support.  Here I’ll explain why I think Valaris gets a little carried away in his attempt to capture the Frege condition.  He says,

    suppose you already believe R, and hence all ways for things to be that are open to you are such that all members of R are true. In this context, coming to believe ˹that p follows from R˺ just is coming to believe p, since [believing ˹that p follows from R˺] consists in ruling out all ways for things to be in which all members of R are true and p is not — which, in the present context, means ruling out all possibilities still open for you in which p is not true.4 (Valaris 15)

    If I understand him (and I’m not sure I do), he is claiming something like this:

    Sufficiency: necessarily, if one believes both R and that P follows from R, then one believes P.

    I reject Sufficiency for familiar reasons.  On a natural construal of lottery and preface paradoxes, one believes each of many conjuncts and believes that the truth of the conjunction follows from the truth of the individual conjuncts.  Nonetheless, the subject fails to believe the entire conjunction.  More generally, paradoxes typically involve our believing each of the premises and seeing that those premises entail a certain conclusion, without our believing the paradoxical conclusion.  We may eventually give up belief in one of the premises as a result of seeing the paradox and engaging in philosophical reflection, but our initial state is one of recognized incoherence: we reject the conclusion, even though we see it follows from other things we believe.

    Valaris relies on Sufficiency in his attempt to capture Frege’s condition.  I think Sufficiency is false.  If I’m correct, I don’t think that Valaris’ account suffers from any deep problem.  But it does press the question of what real work all the talk of ruling out possibilities is doing for Valaris.

    2. Evidence and the Nature of Taking

    Valaris argues that the taking involved in Frege’s Condition must consist in a belief, or at least some kind of mental state “that can be assessed for epistemic justification” (6).  For simplicity, I ignore the possibility that there are other states that are assessable for justification besides belief.  I think the argument fails.  Here is the argument:

    Consider the following case. Tom has some irrational theoretical beliefs. For example, he believes that certain spots on people’s faces indicate that they have been marked by a demon, and once so marked they will soon die. Tom sees such spots on Bob’s face. As a result of his theoretical beliefs, he takes it that the spots on Bob’s face is evidence that Bob will soon die. As it happens, the spots on Bob’s face are a sign of advanced disease, and so their presence does in fact indicate that Bob will soon die. And yet Tom’s belief that Bob will soon die is not, intuitively, justified, no matter how reliable a sign of impending death the spots might be. This is because, although the presence of the spots on Bob’s face does support the conclusion that he will soon die, Tom (in light of his irrational theoretical beliefs) is not justified in taking them to support this conclusion. Thus the takings required by Frege’s condition must exemplify states that can be assessed for epistemic justification. (Valaris ms: 5-6, emphasis original)

    I agree wholeheartedly with Valaris that Tom is not justified in believing the conclusion that Bob will soon die.5  Yet I reject both his diagnosis and its supporting assumption.  Valaris’ diagnosis of why Tom isn’t justified in believing his conclusion is that Tom “is not justified in taking [his premises] to support this conclusion” (Valaris 5-6, emphasis removed).  Here Valaris seems to impose the following requirement on inferential justification:

    HLJB: S’s inference from E to P can justify S’s belief that P only if S has a justified higher-level belief to the effect that E supports P. (cf. HLJB from my 2012: 324-5)

    In my 2012, I argue that HLJB is false (336-7).  The argument appeals to intuitions about cases, the same sort of strategy Valaris uses in his argument for HLJB.  Here is a short-ish version of my argument.  Let E be the conjunction of ~Q and ‘if ~P, then Q’.  Now suppose that S is acquainted with E’s entailing P or that he has a veridical intuition that E entails P.  If it makes a difference, you can also assume that the intuition is caused by some reliable and properly functioning mechanism.  Finally, suppose that together S’s justified belief in E and S’s acquaintance with E’s supporting P (alternatively: S’s intuition that E supports P) non-deviantly causes S’s belief that P.6  Regardless of whether S bothers to form the belief that E supports P, I say both (i) that S has inferred P from E and (ii) that, in the absence of defeaters, this inference justifies his belief that P.  In such a case, S has awareness of the evidential connection that plausibly provides him with non-inferential propositional justification that the evidential connection obtains.7  This awareness of the evidential connection explains why the subject transitioned from belief in E to belief in P.  Why would believing that E supports P be so crucial that, without such a belief, the transition from believing E to believing P could neither count as an inference nor justify the belief that P?  I haven’t the slightest idea.

    So I think we have some positive reason to reject both that taking requires believing that E supports P and that such a belief is required for inferential justification.   Yet we are still left with an important question: why isn’t Tom justified in believing his conclusion that Bob will soon die?  I say the problem is that Tom’s premise fails to provide evidence for his conclusion.  Here is Tom’s argument:

    1. Bob has a certain type of spot on his face.
    2. Therefore, Bob will die soon.

    Tom’s theoretical beliefs about demons do not serve as a premise.  As Valaris sets up the case, these theoretical beliefs constitute Tom’s taking the spots to support imminent death (5-6).  Valaris claims that, by itself, 1 supports 2:

    As it happens, the spots on Bob’s face are a sign of advanced disease, and so their presence does in fact indicate that Bob will die soon….the presence of spots on Bob’s face does support the conclusion that he will soon die. (5)

    Since Valaris transitions from claims about indication to claims about support, he seems to think that E supports P iff E reliably indicates P.  What Valaris says about ruling out possibilities on pp. 15-16 also suggests a (normal worlds) reliabilist account of evidential support. Lots of people assume that some sort of reliabilist account of evidential support is true, but I’m not sure they’ve thought through the consequences of this view.

    In my 2014, I argue that such views have many implausible implications about what supports what.  The details will vary a bit depending on what type of reliability is at issue, but here are two quick examples:

    Necessary Propositions: every proposition reliably indicates every necessary truth.  A reliability account, then, holds that I had cereal this morning supports the claim that Gödel’s First Incompleteness Theorem is true.  Yet it’s false that every proposition supports every necessary truth.

    Similarity-Constituting Truths: a proposition is similarity-constituting iff it is true in all the most similar worlds.  Any proposition true in one of the most similar worlds will reliably indicate, and so support, all similarity-constituting truths.  Take my belief that coffee was discovered in Ethiopia.  According to reliability theories, this belief will support the proposition identifying all and only laws of nature, the proposition identifying precisely how many miracles occur, the proposition that I do not win a Grammy, and the proposition that there existed humans on June 20th, 2000.  Yet my belief about coffee’s origin doesn’t support any of these claims.
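
    The first of these examples can be made concrete with a small sketch, which uses a crude stand-in for reliable indication (no world in the relevant set makes the indicating proposition true and the indicated one false); the labels and the tiny set of worlds are invented purely for illustration:

    # A toy model: a necessary truth holds in every world, so it holds throughout any set of worlds.
    worlds = [
        {"had_cereal": True,  "goedel_theorem_true": True},
        {"had_cereal": False, "goedel_theorem_true": True},
    ]

    def indicates(e, p, worlds):
        """Crude reliability: no world makes e true and p false."""
        return all(w[p] for w in worlds if w[e])

    # Because the necessary truth holds everywhere, any proposition whatsoever "indicates" it:
    print(indicates("had_cereal", "goedel_theorem_true", worlds))   # True

    On this kind of account, then, what I had for breakfast comes out as support for Gödel’s theorem, which is just the implausible implication noted above.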

    Valaris seems to endorse a normal worlds reliability conception of evidential support: if E’s truth plus the assumption of normality guarantees P, then E supports P.  To make your own counterexample to this account, just figure out what contingent truths are true in all normal worlds—call these propositions normalcy-constituting—and then any proposition true in some normal world will support all normalcy-constituting propositions.

    See my 2014 for additional counterintuitive implications and for further development and defense of this basic worry.  For the purposes of these comments, I’ll just say that I find it implausible that 1 supports 2 all by itself and that, to whatever extent Valaris is relying on a reliabilist account of evidential support, his account of evidence is highly problematic.

    Of course, someone might insist that Tom’s belief that

    1.5. spots of the relevant type are marks from a demon that indicate a person will soon die

    is really playing the role of a second premise.  I do think that 1 and 1.5 jointly support the conclusion that Bob will soon die.  But Valaris made it clear that the belief in 1.5 is not justified.  On this re-construal of the case, then, the problem is that an essential premise is not justified.  An inference justifies its conclusion only if all its essential premises are justified. Plus, if believing 1.5 doesn’t count as taking 1 to support 2, it’s now unclear what satisfies Frege’s Requirement.

    Conclusion

    In section 1, I argued that it’s not clear what work the “ruling out the possibilities” framework is doing that the simple account does not already do.  I worry that the additional framework adds complexity and controversy without cause.  Section 2 concerned Valaris’ argument that taking must involve a state that is assessable for justification, such as a belief.  A key premise in this argument is HLJB, the claim that an inference justifies a belief only if the subject justifiably believes that E supports P.  I argued that HLJB is false and that Valaris’ defense of it fails.  It’s false because you don’t need a belief that E supports P when you are already non-doxastically aware that E supports P and such awareness plausibly gives you non-inferential justification that E supports P.  The defense fails, because it relies on a problematic notion of evidential support.

    References

    Boghossian, Paul. 2014. “What is Inference?” Philosophical Studies 169: 1-18.

    Fumerton, Richard. 1995. Metaepistemology and Skepticism. Boston: Rowman and Littlefield Publishers, Inc.

    Korcz, Keith Allen. 2010. “The Epistemic Basing Relation.” Stanford Encyclopedia of Philosophy. Stable URL: http://plato.stanford.edu/entries/basing-epistemic/.

    Tucker, Chris. 2014. “On What Inferentially Justifies What: The Vices of Reliabilism and Proper Functionalism.” Synthese 191: 3311-28.

    _____. 2012. “Movin’ on Up: Higher-Level Requirements and Inferential Justification.” Philosophical Studies 157: 323-40.

    _____. 2010. “When Transmission Fails.” Philosophical Review 119: 497-529.

    Valaris, Markos. Manuscript. “What Reasoning Might Be.”
     
     


    Notes
    1. See Valaris ms: 2-5 for the idea that inference is a person-level act. Boghossian 2014: 2-3 agrees.
    2. Valaris ms: 4-5; Boghossian (2014: 4-5) agrees.
    3. See Korcz (2010, sec 1) for a brief explanation of the causal deviance problem.
    4. The emphasis is in the original, but I added brackets to make it easier to tell what the content of the relevant belief is.
    5. Valaris (ms: 6, nt 7) takes my remarks on (2012: 338) to suggest that I disagree with him on this point. Yet Valaris misapplied what I said on those pages to the case at hand. He also misses the argument that I provide earlier in the paper against his diagnosis.
    6. I had originally assumed away the possibility that taking requires belief (333-4), so this part of the argument is reworded in a way that does not take a stand on whether taking (or inference) requires belief that E supports P.
    7. For justification, Fumerton (1995: 75) would further require that one be acquainted with the thought that E supports P and acquaintance with the correspondence between the thought and the fact. If it makes a difference to your intuitive judgment about the case, build the further acts of acquaintance into the case.

  4. I am truly honoured, and a bit overwhelmed, by receiving three sets of great, thoughtful comments! I have already learned a lot from thinking about these comments, and I am not yet done. I will try my best to keep what follows brief and to the point, but there is a lot to say.

    Let me start with a couple of clarifications that will be important throughout this discussion. My aim in the paper is to sketch an account of reasoning that can meet what Boghossian (2014) calls the “Taking Condition”:

    Inferring p from a set of premisses R requires taking R to support p, and coming to believe p (partly) because of this.

    Here are a couple of things to note:

    ** This is only a necessary condition for reasoning, not a sufficient one. One thing that I have left out, and which Zoe and Matt both raise concerns about, is the need for cognitive integration. In order to reason from a set of premisses R to a conclusion I need somehow to think of all members of R together, “in one consciousness”.  Moreover, as Matt points out, it is a mistake to think of this “bringing together in one consciousness” as conjunction, as conjunction introduction is itself a case of inference (and, as the Preface Paradox shows, not always an innocent one).

    In earlier work (Valaris 2014), I dealt with this by adding that the subjects I was talking about are “attentive”. But this of course is a place-holder for a more serious account, which I still do not have. I acknowledge that my account is incomplete in this way, but I am not sure that this detracts from the idea that the Taking Condition is a necessary condition for reasoning.

    ** I should have said more about the relevant notion of “support” in the Taking Condition (and I do so in a newer version of the paper). The notion I have in mind is perhaps best captured by the term rational commitment. This is a familiar notion: you can object to my philosophical views by showing that what I have explicitly accepted commits me to some further, unacceptable, things. So what the Taking Condition really requires is that to count as reasoning from R to p, you must (among other things) take it that committing to R rationally commits you to p.

    Can we say a bit more about rational commitment? This is one area where my discussion of possibilities is, I think, helpful. The concept of rational commitment is closely related to the concept of epistemic possibility: you are rationally committed to p when there are no ways for things to be that are real epistemic possibilities for you and which make p false. (I do not claim that this is a reductive account of rational commitment: if you don’t grasp the concept of rational commitment you probably don’t grasp the concept of an epistemic possibility either.) On my view, this is not something that you automatically recognize: you could be wrong or ignorant about what really is epistemically possible for you at a given time. This is because I assume that there will be lots of “ways for things to be” which you may think are epistemically open to you, but are not real possibilities.

    This clarification helps me answer a concern raised by Chris. Chris attributes to me a reliabilist view about evidence. I may well have given that impression in the paper, and that is my mistake. But I did not mean to give an account of evidence in the paper at all. The modal framework I sketch is meant to capture the notion of rational commitment, not evidence. The two are not the same: if you show me that my philosophical views lead to contradiction you do not thereby show that there is evidence for the truth of a contradiction.

    ** Both Matt and Chris question my sympathy for a doxastic construal of the Taking Condition. It may therefore help to say something about how I understand belief (it is, incidentally, a bit of a scandal for epistemology and philosophy of mind that we have no consensus account of belief).

    I think of belief in something like the way suggested by Hieronymi (2006) or Gibbons (2013). To believe that p just is to take an affirmative stand on the question whether p. In this respect, it differs from other ways of taking something to be true: you may take p to be true for the sake of the argument, but this does not mean that you have taken a stand on the question whether p.

    Of course while taking a stand on some question will generally involve various dispositions, I am not assuming that it is reducible to such dispositions. Taking a stand is a primitive normative notion: once you take a stand on a certain question, then it is appropriate for us to hold you to certain commitments — even if, as it turns out, you are pretty bad at sticking to your commitments.

    On this view of belief the doxastic reading of the Taking Condition seems natural, but I will discuss a specific disagreement on this point with Chris later on.

    Reply to Matt Boyle

    Matt is sympathetic to my negative argument, but raises a number of concerns regarding my own proposal. One of these, if I understand correctly, is that my discussion of considering and ruling out possibilities, even if not inferential in some sense, still draws upon the same sorts of rational capacities that I am trying to explain.

    But I think that Matt might be asking too much here. The account developed here is an account of reasoning — which I take to be a circumscribed, specific activity. Crucially, I take my topic to be more narrow than the question of what it is for a belief to be rationally based on another belief or intentional state.

    Matt acknowledges this point, but thinks that it is not enough to defuse the worry.  He thinks that, although it may well be the case that a person who understands a proposition does not need to explicitly reason her way to ruling out the possibilities it does rule out, the sorts of capacities involved in this ruling out just are the sorts of capacities involved in reasoning.

    But I am not sure I agree with this. Consider for example the familiar (though not uncontroversial) idea that perceptual states can justify beliefs non-inferentially. By this, proponents of that idea do not just mean the relatively banal point that in everyday perception there is no conscious act of reasoning from what we see to beliefs about our environment. The point is supposed to be that the justification that accrues to perceptual beliefs from perceptual states does not consist in the subject’s being in a position to grasp a good argument from her perceptions to her beliefs. In other words, the point is that the sorts of capacities that are involved in basing beliefs on perceptions are not the same capacities that you use in reasoning.

    I am inclined to say something similar in our case here. There is clearly a sense in which ruling out possibilities is grounded in your understanding of the relevant propositions. But I do not think that we need to conceive of this grounding in terms of reasoning.

    How do we conceive of it then? I doubt that a reductive account of understanding (e.g., in terms of dispositions) would work. Perhaps, just as in the perceptual case, we need to take understanding to be a sui generis rational capacity. I would not feel embarrassed if that were the outcome.

    Matt is further concerned about how my account can make sense of the idea of a subject’s taking some specific premisses to stand in a rational relation to a conclusion.

    Matt begins by claiming that you might take A, B and ~C to be inconsistent, but without taking any two to imply the exclusion of the third. I am not entirely sure what Matt has in mind here, but perhaps the following would be an example of his worry:

    A. Roses are red

    B. Violets are blue

    C. 1+1 = 2

    A, B and ~C are inconsistent; but one would not normally take C to follow from A and B in the sense relevant to reasoning.

    This seems correct, but the implications for my account are not clear. My account says only that if someone came to accept C in virtue of recognizing that A and B are inconsistent with ~C, she would count as reasoning from A and B to C. It is hard to come up with a realistic story in which this would happen, but it is not obvious that if it did we would not want to count it as a case of reasoning. But if we do count it as a case of reasoning, and we also accept something like the Taking Condition, it seems that we should also accept that the subject’s recognizing the inconsistency was her taking it that C follows from A and B — bizarre though it may seem.

    Reply to Zoe Jenkin

    Zoe begins by raising a concern regarding my argument against rule-following accounts. She notes that my concern is that the supposed rules of reasoning deal with rational relations among beliefs or other representations, while intuitively the reasoner need only attend to the subject-matter of her reasoning, rather than to her own representations. She then suggests a couple of possible ways for rule-following theorists to reconcile the tension: either by arguing that the reasoner pays attention both to her own representations and to the world, or by arguing that she pays attention to the world indirectly, by paying attention to her representations.

    I think Zoe is a little unfair in the way she represents my argument. My discussion of rule-following theories begins with my arguing that existing versions of the rule-following approach do not work. The basic idea of that argument is that directly applying rules that are about your attitudes will only give you some judgment about what attitudes you may or may not have, not a judgment about the subject matter of your reasoning. There will, therefore, always be an extra step that you need to take to bring your reasoning to a conclusion. If this step is itself a case of reasoning, then we are going in circles; but if it is some kind of automatic process, then the appeal to rule-following becomes otiose.

    The part that Zoe focuses on comes after this argument, by way of diagnosis: we should expect rule-following theories to have trouble capturing reasoning from the reasoner’s own point of view (the sort of trouble I documented earlier), precisely because of their formal character. So if a rule-following theorist wants to avoid this conclusion, they would have to show that their view can avoid the problems sketched earlier.

    Zoe also offers two lines of criticism of my positive proposal. The first partly concerns cognitive integration, and, as I already acknowledged, my account really is incomplete on this point.

    I think, however, that Zoe goes too far when she argues that my account does not have the resources to handle the difference between two people who both believe a set of premisses R from which a conclusion q deductively follows, with one of them having concluded q while the other has not. My account, unlike a standard Stalnakerian account, can handle this difference. This is because the “ways for things to be” that I discuss are not Stalnakerian possible worlds, since they do not have to be logically closed. I can believe a proposition, and so rule out “ways for things to be” in which it is not true, while not automatically ruling out ways for things to be in which its logical consequences fail to be true. On my account you do not automatically believe the consequences of what you believe. This is why, on my account, reasoning is a matter of noticing what your beliefs commit you to. This still seems right to me.
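
    One way to picture this, as a rough sketch rather than anything I would insist on, is to let the ways for things to be include incompletely characterized ways, which may simply be silent about a given proposition; believing something then rules out only the ways that explicitly conflict with it:

    ways = [
        {"P": True,  "Q": True},
        {"P": True,  "Q": False},
        {"P": False, "Q": False},
        {"P": True},              # an incompletely characterized way, silent about Q
    ]

    def conflicts_with(way, atom):
        """A way conflicts with an atomic belief only if it explicitly makes it false."""
        return way.get(atom) is False

    def conflicts_with_conditional(way, antecedent, consequent):
        # A way is recognized as incompatible with "if P then Q" only if it explicitly
        # makes P true and Q false; a way that is silent about Q is not excluded.
        return way.get(antecedent) is True and way.get(consequent) is False

    open_ways = [w for w in ways if not conflicts_with(w, "P")]                        # believe P
    open_ways = [w for w in open_ways if not conflicts_with_conditional(w, "P", "Q")]  # believe "if P then Q"

    print(open_ways)   # [{'P': True, 'Q': True}, {'P': True}]

    The way that is silent about Q remains open, so the subject does not yet count as believing Q; drawing the inference is the further step of noticing that her commitments in fact close off that way as well.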

    Zoe’s second criticism concerns whether my account has the ability to distinguish between reasoning and other forms of transitions among beliefs, including brutely physical ones. While I acknowledge that the Taking Condition is only a necessary condition for reasoning, it does seem to me to be strong enough to rule out the cases Zoe is worried about. So I do not see a problem here.

    Reply to Chris Tucker

    Chris begins by asking whether I accept the “simple account” of reasoning:

    Simple Account: S infers proposition P2 from proposition P1 iff S believes P2 because both (i) S believes P1 and (ii) S recognizes that P1 supports P2.

    I accept half of this account: If you infer P2 from P1, then you believe P2 because both (i) you believe P1 and (ii) you recognize that P1 supports P2.

    But the other direction is implausible: my belief in P1 and my recognition that P2 follows from P1 might lead me to believe P2 in a deviant way. So I do not accept the bi-conditional.

    What I try to do in my positive account (the bit about considering possibilities) is explain just how your belief in P1 and your recognition that P2 follows from P1 combine to get you to believe P2 when things go well.

    Chris also wonders whether I accept “sufficiency”:

    Sufficiency: necessarily, if one believes both R and that P follows from R, then one believes P

    I do not accept Sufficiency as stated (though in fairness, I do not explicitly discuss it in this paper). As explained elsewhere (Valaris 2014) I do accept the following:

    If one believes both R and that P follows from R, then — barring inattention or irrationality — one thereby believes P.

    I have made this point by noting the Moore-paradoxicality of the following:

    R, and P follows from R, but I do not believe P.

    This seems like an incoherent thing to assert or believe, in just the same way as “P, but I do not believe P” does. And, I suggest, this is because believing the first part of this conjunction — namely, R and that P follows from R — normally just is believing that P.

    Notice, by the way, that the following schema is not similarly Moore paradoxical:

    R, and if R then P, but I do not believe P.

    Consider the following:

    If a Democrat wins in 2016, then if Hillary Clinton does not win Bernie Sanders will; a Democrat will win in 2016; but I do not believe that if Hillary Clinton does not win Bernie Sanders will. (see McGee 1985)

    I do not think there is anything incoherent about this (though it may be hard to work into casual conversation). Asserting it is tantamount to questioning whether rational commitment is closed under modus ponens. And doubting this does not seem incoherent, even if many would hold that such closure is in fact a priori true.

    The same goes for the Paradox of the Preface that Chris worries about. Suppose Q = P1 & P2 & … & Pn. The following, then, is fine by my lights:

    P1, P2, …, Pn, but I do not believe Q.

    This expresses doubt about whether our commitments should be closed under conjunction introduction; and such doubt is not incoherent. What would (by my lights) be incoherent is this:

    P1, P2, …, Pn, and Q follows from P1, P2, …, Pn, but I do not believe Q.

    And, I submit, this really does seem incoherent. Indeed, if this were not intuitively incoherent it is hard to see why the Paradox of the Preface would be a paradox at all.
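
    (For background, and not something my account relies on: the familiar arithmetic behind the preface author’s predicament is that each individual claim can be highly credible while their conjunction is not. Here is a minimal illustration in Python, assuming, purely for the sake of the example, 100 independent claims each with probability 0.99.)

        # Background illustration (not from the paper): many individually credible,
        # independent claims can still yield an improbable conjunction.
        p_each, n = 0.99, 100
        print(round(p_each ** n, 2))  # roughly 0.37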

    So what should we say about the Preface? This is way outside the scope of the present paper, but there is one point that deserves brief note: if the right way to resolve the paradox is to reject the closure of rational commitment under conjunction, then we cannot assume that the real epistemic possibilities for a subject are classical possible worlds. But nothing major in my account hinges on that.

    Finally, Chris disagrees with the sympathy I show for the doxastic construal of the Taking Condition. The disagreement with Chris concerns the following sort of case:

    i. You are justified in believing E, and you infer P from E (because you take it that P follows from E), and thereby end up with a justified belief in P.

    ii. I am also justified in believing E, and I also infer P from E (because I take it that P follows from E), but the belief in P I end up with is not justified.

    If this is possible, then there must be something about the way in which you and I respectively infer P from E that explains why I am not justified in believing P by inference from E while you are. And one way to capture this would be to say that you are, while I am not, justified in taking it that P follows from E. (Perhaps my reason for taking it that P follows from E rests on bizarre astrological beliefs, for example.) And this suggests that the “takings” in question are the sort of thing that can be epistemically justified or unjustified.

    Chris, in his comments, suggests a different explanation. He suggests that while E is evidence for P for you, it is not such evidence for me. This by itself is quite plausible; evidential relations are naturally relativized to background knowledge, and our background knowledge in this sort of case differs in relevant ways. But the question is whether it is explanatory. I do not have a worked-out theory of evidence, so I am not especially confident in what I say, but here is a concern.

    An attractive way to think of evidence might be something like this: E is evidence for P for subject S just in case there is a good piece of reasoning available to S that would get her from E to P. (This is crude, since it does not take into account differences in evidential strength; but set that aside.) If this is the right way to think of evidence, then saying that my reasoning in the earlier example is bad because E is not evidence for P for me is not explanatory. For, on this view, to say that E is not evidence for P for me just is to say that no good reasoning from E to P is available to me, and that is the very fact we were trying to explain.

    References

    Gibbons, John. 2013. The Norm of Belief. Oxford: Oxford University Press.

    Hieronymi, Pamela. 2006. “Controlling Attitudes.” Pacific Philosophical Quarterly 87: 45–74.

    McGee, Vann. 1985. “A Counterexample to Modus Ponens.” The Journal of Philosophy 82 (9): 462–71. doi:10.2307/2026276.

    Valaris, Markos. 2014. “Reasoning and Regress.” Mind 123 (489): 101–27.

    1. Hi Markos,

      Thanks for your helpful replies. Here is one rejoinder (I may have another one later, if I can formulate my other worry clearly).

      In my comments, I took you to be committed to something like a normal worlds reliabilist account of evidential support. I gave what I took to be counterexamples to that style of account. You counter that you really mean to talk about rational commitment, not evidential support: “The concept of rational commitment is closely related to the concept of epistemic possibility: you are rationally committed to P when there are no ways for things to be that are real epistemic possibilities for you and which make P false.”

      If the notion of epistemic possibility here is roughly what you gave in the original paper, I don’t think this change is enough to avoid the (alleged) counterexamples I gave in my comments. I would just reword them to concern rational commitment. (In fact, it might be that my term “evidential support” is roughly what you mean by “rational commitment.” There is more to evidence, on my view, than evidential support. My belief in E is evidence for believing P only if (i) E supports believing P, i.e. rationally commits me to believing P, and (ii) my belief that E is justified.)

      Consider the simplest example I give in the comments: my belief that coffee was first discovered in Ethiopia entails every necessary truth, so the denial of these necessary truths is not epistemically possible for me. So my belief about coffee’s origin, on your view, rationally commits me to believing every necessary truth. But that seems false: my coffee beliefs do not rationally commit me to believing every proposition that happens to be necessarily true. The other alleged counterexamples I give can also be reworded into the language of rational commitment. In other words, I don’t think you can avoid those alleged counterexamples as easily as you seem to think.

      Best,
      Chris

      1. Hi Chris,

        Thanks again for the comments! I have said very little about the space of epistemic possibilities (mostly because I don’t have especially firm views here), but I suspect there are ways of construing this idea on which the end result is actually acceptable.

        If you take epistemic possibilities to be roughly the same as metaphysical possibilities, then you are right that the result is not palatable: it is not the case that everyone is rationally committed to the proposition that Cary Grant is Archie Leach.

        But I do not think this is obligatory, since I do not have an agenda of reducing the epistemic to anything else. (Nor do I expect a reductive account of “far-fetched” or “abnormal” possibility.) So I can take epistemic possibilities to be much finer-grained than that. In fact, as I suggested in my paper, this is where I find a role for (so-called) rules of reasoning: these, on my view, are rules of rational commitment, not rules of reasoning.

        Now even on this finer-grained picture there will be some necessities, and so you will have the result that everyone is automatically committed to them. But I don’t find that nearly as threatening as the analogous result with metaphysical necessity. I do not think it would be so counter-intuitive if I were to turn out to be rationally committed to a complicated mathematical theorem whose proof I would have no hope of grasping even if I tried.

  5. Thanks all for the fantastic posts! I wanted to push back on something in the final two paragraphs of Markos’ response to commentaries, which connects to some of the points earlier in the target paper.

    I have to admit that I was sympathetic to Chris’s diagnosis of the “spots” case, and I find the “not explanatory” response puzzling. I’m not sure I fully understand it, and I wonder if there’s a gap here between the perspective of a more content-oriented philosophy-of-mind person like myself and the perspective of the target paper that’s preventing me from getting it.

    As best I can tell, the response seems to conflate an explanatory project and an “all things considered” justificatory project in a way that is not motivated. I don’t have to endorse A’s reasoning as good to understand it as A’s reasoning, to understand what A saw in the inference A drew, and to explain A’s conclusion as a result. Granted, there has to be some minimal kind of coherence – maybe such that I, with a different learning background etc., could see myself having made a similar inference – but that’s a pretty low bar to clear compared to some of the other criteria considered in the discussion above – maybe even low enough to allow seemings. (If I’d been raised in Tom’s family, perhaps I’d take the spots as evidence for impending death too…)

    To try to motivate the latter idea a bit more, I also think it odd that you don’t want to hold people rationally responsible for reasoning based on transitions whose workings are opaque to introspection (if I understood correctly). Conceding this in the facial recognition-emotion case looks innocuous enough, but I think only because these judgments are typically so fleeting and unimportant that there usually isn’t much at stake.

    Consider a weightier case where inference might be based on seemings – say, a juror in a court case who considers all the evidence and votes to convict a dark-skinned defendant when the same juror would not have convicted a light-skinned defendant on the same evidence. I think we should hold the person responsible here, not just morally, but also rationally, for not having sussed out the biases at work in the judgment and considered the evidence against them. We might have less direct control over these seemings – it might only be through lots of engagement with different types of people or some explicit debiasing program that they could actually be modified, for example.

    And it seems that if we’re considering any broadly counterfactual account of explanation, the way the juror takes the case causally explains why they voted as they did, in the sense that if they hadn’t had that implicit bias, they would have voted otherwise. To scratch Fregean itches, we can say here that the way the defendant is presented to the juror in mental life (the “mode of presentation”) determines the resulting voting decision, even if the juror isn’t aware of the perceptual evidence that the “guilty” judgment is drawing upon. So I don’t see why the relevant ownership criteria couldn’t be satisfied here; one’s implicit biases are part of one’s view of the world – for better or for worse.

    Apologies if I missed something obvious! As you might imagine, I haven’t had as much time to read all of these as I might have liked…

    1. Hi Cameron,

      Thanks for the comment. I think I was confusing explanatory and all-things-considered justificatory concerns in the original paper at the spot where the disagreement with Chris arose, but I thought I had fixed that in the reply! The disagreement with Chris is about all-things-considered inferential justification, and how it is best explained.

      As for the latter point, I still think the two cases are importantly distinct. There may well be a rational requirement for good epistemic self-management. Perhaps if I have bad eyesight, and I regularly have to make judgements that require visual acuity greater than what I naturally possess, I am rationally required to get a new pair of glasses. But this, it seems to me, is quite different from the sort of requirement at play in cases of reasoning.

      There is, I guess, a further complication here. If I am aware that I have this visual deficit (or an implicit bias, in the case you are describing), then I might also be required to suspend my usual practice of immediate perceptual judgement, and instead treat my visual appearances as defeasible evidence about the external world. In that case, everything I say about reasoning would apply here as well. But I am assuming that this is not how perceptual judgement normally works.

      1. Hi Markos,

        Thanks for the additional reply, but I want to keep pushing. I didn’t mean the court case, or the kind of intuitive judgment I’m interested in, to get totally subsumed under the category of simple perceptual judgments, and I don’t want to get sidetracked by the old questions about whether perception should count as inferential. The intuitive judgments I’m interested in might draw substantially on perceptual evidence, but they are quite different from a typical case of perceptual recognition.

        The line being pushed, of course, is that such intuitive judgments are rationally sensitive to evidence, even in quite interesting and flexible ways; it is just that their sensitivity is largely opaque to introspection and typically plays out over longer timescales. Contrast these with perceptual illusions – the kind of case you seem to be trying to equate them with – where, no matter how long I look at the Müller-Lyer lines, my perceptual assessment of their relative lengths will never change.

        So I would happily concede that there are interesting differences between (at least) three tiers of judgment – basic perceptual judgments, opaque intuitive judgments, and explicit inferential judgments. The claim is just that the considerations which led us to call the last of these rational (sensitivity to evidence, ownership, intensionally-sensitive causes of belief and behavior) should lead us to believe that the joint with respect to explanatory rationality should go between the first and second category, rather than between the second and third.

  6. Thanks for a fascinating discussion, everyone. Rather like Cameron, I’d like to resist Markos’ suggestion that rapid judgements about others’ emotions, based on subtle facial and behavioural cues, aren’t inferences. But I’m inclined to do so by arguing that they do meet some version of Frege’s condition, so I don’t think it harms your main thesis.

    You say that “when making such a judgment you need not be aware of the grounds on which you have made it.” But you do need to at least perceive the grounds, and there seems to be at least a sense in which you ‘take them as supporting’ the judgement – namely you perceive them as expressing the emotion. Of course there’s a lot of second-order stuff the subject often doesn’t know: you can’t verbally articulate which particular cues are the grounds of that judgement, and your ability to attend to them as such is very imperfect, though not absent (I say not absent because it seems quite possible to scrutinise the face that ‘just looks surprised’ and try to work out what the relevant features were).

    It seems to me that this kind of semi-explicit awareness matches what you can also find in some cases that are obviously inferences. Cameron used the example of a biased juror, but consider even an exemplary juror: after going back and forth over the evidence, trying to work out whether to believe one person’s story or another, they might eventually choose one, based on the evidence available, and with a strong feeling that the evidence overall supports one story over the other (‘it just doesn’t add up’, they might say about the other stories), but without being able to identify all, or even any, of the specific grounds of that judgement. But that case can’t be perception, and I’d be very reluctant to say that it’s not inference.

    (A lot of the examples in Jack’s paper, of states that cannot be reported but still have enough inferential and evidential links to other states to be called ‘beliefs’, are probably relevant here, and could be trivially modified so that the agent makes some sort of ‘inference’ from their ‘belief’.)

    “If you misread another’s facial expressions, your mistake is more akin to a perceptual illusion than a case of bad reasoning.”
    I think this is partly true. There is a degree of involuntariness to these kinds of perceptual judgements, and involuntariness makes them seem less agential and thus less like inferences (though inferences can’t be purely voluntary either, obviously). But there’s also a degree of voluntary control present that’s absent with, e.g., the perception of colour: we can choose to ‘try to see’ a face as scared rather than surprised (rather as we can try to see a picture of a duck’s head as a picture of a rabbit’s head), but we can’t try to see a red thing as yellow. I think this makes rational criticism partially appropriate: to the extent that people can manage and moderate their perceptual habits, they can be held accountable for failing to do so, or for doing so in irrational ways.

    As I said, this is sort of a side-issue to your main thesis, and I don’t think adopting a wider view of the extension of ‘inference’ would harm your case. But I am independently interested in this particular case, so I wonder what you think about it.

    1. Thanks Cameron and Luke, these are really interesting points; you are right to resist assimilating opaque intuitive judgements to immediate perceptual judgements. (I actually try to be more careful about this in a more recent version of the paper, where I contrast two supposedly “System 1” judgements: a straight perceptual illusion and the mistaken intuitive judgement in the Linda-the-bank-teller case.)

      The way you elaborate the juror case actually fits well with my intentions in the paper, although you might not guess that, since most of my examples were trivial deductive inferences. But a large part of what I’m trying to do is push back against the tendency to think of reasoning as a formal affair. On the contrary, I take reasoning to rely heavily on background knowledge and assumptions (as opposed to premisses), and on capacities for considering possibilities.

      So, the way I would describe the juror case is as follows. When she receives the evidence, her present background knowledge and assumptions can be seen as embodied in an ordering of plausibility (what I call “normality” in the paper) on the possibilities she now has to choose from: some of those possibilities will seem more far-fetched than others. She doesn’t have to be aware of why some possibilities seem more far-fetched than others; she just needs to be aware that they do.

      This is perfectly consistent with the Taking Condition, as the Taking Condition (applied to the juror) would only require her to take it that the evidence presented in the case supports verdict V, in the sense that V holds in all the most plausible epistemic possibilities consistent with that evidence.
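
      To make the shape of this explicit, here is a minimal toy sketch in Python (my own illustration, with made-up ranks and labels, not anything from the paper): the evidence supports verdict V just in case V holds in all of the most plausible possibilities consistent with the evidence.

          # Toy sketch of "support" via a plausibility ("normality") ordering.
          # Lower rank = more plausible; the ranks and facts below are made up.
          possibilities = [
              {"rank": 0, "evidence": True,  "verdict_v": True},   # most plausible evidence-possibility
              {"rank": 2, "evidence": True,  "verdict_v": False},  # far-fetched alternative
              {"rank": 0, "evidence": False, "verdict_v": False},
          ]

          def supports(possibilities, evidence_key, verdict_key):
              # The evidence supports the verdict iff the verdict holds in every
              # most-plausible possibility in which the evidence holds.
              consistent = [p for p in possibilities if p[evidence_key]]
              if not consistent:
                  return False
              best = min(p["rank"] for p in consistent)
              return all(p[verdict_key] for p in consistent if p["rank"] == best)

          print(supports(possibilities, "evidence", "verdict_v"))  # True

      The juror need not be aware of what generates the ranking; it simply encodes which possibilities strike her as more or less far-fetched.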

      In other cases, another option would be to treat your own cognitive feelings as evidence. So the fact that Raji seems angry to me might be some reason to think that she is angry, but the fact that she says that she is not is reason to think that she isn’t; and I have to weigh the two. That would be a straightforward inferential case, by my lights.

      1. This is just what I had hoped you would say. 🙂 I will bow out now, since the System 1 stuff doesn’t directly bear on your interesting positive proposal, but thanks again for the interesting paper and responses!

