Chris Tucker (College of William and Mary)
Just as a theory of representation is deficient if it can’t explain how misrepresentation is possible, a theory of computation is deficient if it can’t explain how miscomputation is possible. You might expect, then, that philosophers of computation have well-worked-out theories of miscomputation. But you’d be wrong. They have generally ignored miscomputation. Worse still, when it hasn’t been ignored, it’s been conflated with malfunction of a computing mechanism. Piccinini claims that “if the [computing] mechanism malfunctions, a miscomputation occurs” (Piccinini 2015a, sec 2.5; 2015b: 122). Fresco and Primiero make a similar mistake: “When a [computing] system fails to accomplish the purpose for which it was designed, a miscomputation can be identified” (2013: 257).
Miscomputation is a special kind of malfunction. If the battery dies, a system may fail to compute what it is supposed to compute. But it’s not miscomputing, because it’s not computing at all. Just as something doesn’t misrepresent unless it represents, something doesn’t miscompute unless it computes. To miscompute is to compute in a way that violates a computational norm. Consequently, an adequate account of miscomputation requires an account of what the system is computing when the system is violating the relevant computational norms. I argue that providing this account is easy for the computational individualist, but hard for the computational externalist.
Computational individualism is the claim that the computations a system performs are individuated narrowly. In other words, if we hold the laws of nature fixed, then a system’s computational structure supervenes on its physical structure. This view entails that neither a system’s environment nor its role in some larger system affects its computational structure. Its denial is called computational externalism. Computational externalism is the dominant position on computational individuation, but do not confuse it with content externalism. I assume, for the sake of the paper, that content externalism is true. It does not follow that computational externalism is true. Causal connections to water can affect the content of our thoughts, even if they can’t affect the computational structure that underwrites those thoughts.
In §1, I briefly present Piccinini’s mechanistic theory of computation and argue that it can be combined with individualism. In §2, I show that this individualist, mechanistic theory easily accounts for miscomputation. In §3, I argue that externalism has difficulty accounting for miscomputation.
1. Computational Structures and Functional Structures
I focus on digital computation. Oversimplifying, a digital computing system is an input-output device in which the relevant inputs and outputs are strings of digits. A digit is a state of the system. In simple systems, digits are often just electrical charges, where charges of different voltages can count as different digits. A string of digits is an ordered list of digits. For example, a string of digits in a system might be a series of electrical charges. A system computes when the formal properties of the input string “lead” the system to produce a certain output string, e.g., a different series of electrical charges. In the abstract, a computational structure is a complete mapping from the possible input strings to the possible output strings. A system has or implements a certain computational structure when the structure, or mapping, is a correct description of the system’s (actual and counterfactual) behavior. That much is relatively uncontroversial. In the rest of this section, I present a controversial theory of (when a system implements a) computational structure and argue that it can be combined with computational individualism.
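The uncontroversial picture above can be made concrete with a small sketch (my own illustration, not drawn from the paper; the device and names are hypothetical). A computational structure is modeled as a complete mapping from input digit strings to output digit strings, and a device implements that structure just in case the mapping correctly describes its behavior on every possible input:

```python
# A toy computational structure: a complete mapping from input digit
# strings to output digit strings. Here the hypothetical device takes
# two-digit input strings and produces one-digit output strings.
structure = {
    (0, 0): (0,),
    (0, 1): (0,),
    (1, 0): (0,),
    (1, 1): (1,),
}

def implements(device, structure):
    """A device implements a structure iff the mapping is a correct
    description of the device's behavior on every possible input."""
    return all(device(inp) == out for inp, out in structure.items())

def and_device(inp):
    """A hypothetical physical device whose behavior the mapping describes."""
    a, b = inp
    return (a & b,)

print(implements(and_device, structure))  # True
```

The mapping itself is abstract; what makes it *this device's* computational structure, on the view developed below, is that it correctly describes the device's actual and counterfactual behavior.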
Piccinini (2007; 2008; 2015b) has defended a mechanistic theory of computation, which holds that a system’s computational structure is determined by a specific kind of dispositional or functional structure. On this view, to say that a system implements a certain computational structure is not merely to describe the system: it is to explain, at a certain level of abstraction, why the system is doing what it’s doing (or would do were it given a certain input). The relevant type of explanation is a special form of dispositional explanation. We explain what the salt is doing while submerged, in part, by pointing out that it is soluble, that it has a certain dispositional structure. When we explain a system’s behavior by appealing to a dispositional structure that counts as computational, we provide a computational explanation of the system’s behavior. Let the definitive list be the complete list specifying which properties are necessary and sufficient for a dispositional structure to be computational and which properties further individuate a system’s computational structure. The definitive list, then, is the correct and complete account of which functional structures count as computational and which differences between functional structures make a computational difference.
I’ll pretend that Piccinini has given us the definitive list, but even he admits that he hasn’t done so (2015b: 120). He has, however, made substantial progress. One property on the definitive list is medium-independence. To count as a computing system, a system’s behavior must be explicable at a level of abstraction that makes no reference to the media in which the behavior is carried out. Medium-independence is stronger than multiple realizability. There is more than one way to realize removing corks from wine bottles, but this behavior is necessarily performed on a certain kind of media, namely wine bottles and corks. This behavior is multiply realizable but not medium-independent (Piccinini 2015b: 122-3). Medium-independence imposes a significant constraint on which systems compute. Most mechanisms fail to meet this condition (Piccinini 2015b: 146). The behavior of the digestive system is, for example, “quintessentially medium-dependent”: its processes are defined in terms of “specific chemical changes to specific families of molecules” (147; cf. Chalmers 2011: 337-8). By requiring medium-independence, Piccinini’s mechanistic theory avoids pancomputationalism (the idea that everything computes). This is a substantial achievement. Some otherwise plausible accounts, such as Chalmers’s (2011), fail to do so.
The functional structure that meets the conditions of the definitive list does more than identify which systems compute. It also reveals the computational structure of computing systems. Shagrir (2001) observes that voltage values could be grouped in many ways. We could treat any voltage up to five volts as a single digit. Alternatively, we could treat any voltage less than 2.5v as a single digit and any voltage from 2.5-5v as a distinct digit. An adequate account of computational structure must determine which groupings reflect the computational structure of the system. The mechanistic theory claims that the correct grouping is determined by “the components of the mechanism, their functional properties, and their organization” (Piccinini 2015b: 132). In other words, functional significance determines computational significance. If a system’s outputs aren’t differentially sensitive to input voltages of ≤5v, then the computational structure of a system treats those voltages as a single digit rather than two. In sum, to attribute a certain computational structure to a system is to attribute a certain functional structure to the system that explains its actual and counterfactual behavior.
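Shagrir’s point, and the mechanistic answer to it, can be sketched as follows (a toy illustration of mine, with a hypothetical device; nothing here is from Piccinini’s text). Whether the finer 2.5v grouping reflects the device’s computational structure depends on whether the device’s outputs are differentially sensitive to it:

```python
def device(v):
    """A hypothetical component whose output ignores all differences
    among input voltages at or below 5v."""
    return 9.0 if v > 5.0 else 1.0

def differentially_sensitive(device, v1, v2):
    """Do these two input voltages make any difference to the output?"""
    return device(v1) != device(v2)

# 1v and 4v fall on opposite sides of the candidate 2.5v boundary, yet
# they yield the same output. On the mechanistic theory, then, they
# realize one digit, not two: the finer grouping has no functional
# significance for this device.
print(differentially_sensitive(device, 1.0, 4.0))  # False
print(differentially_sensitive(device, 4.0, 9.0))  # True
```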
This sketchy presentation of the mechanistic theory is enough for my purposes. You can always consult Piccinini’s work for the details. I endorse Piccinini’s theory save for two differences. I discuss the second in §2.3. The first difference concerns how the relevant sort of functional structure is individuated. He claims that it is individuated widely, and I claim it is individuated narrowly. He’s a computational externalist, and I’m an individualist. Suppose S is a computing component of some larger system S*. Piccinini assumes Digital Perseverance: necessarily, feature F counts as a distinct digit for S only if F counts as a distinct digit for S*. The computational structure of the whole constrains how digits are individuated for the part (2008: 229, 2015b: 41; cf. Bontly 1998: 570, and Segal 1991: 492-3). If Digital Perseverance is true, then so is computational externalism. For, given Digital Perseverance, the computational structure of a part can’t supervene on its physical structure, though it might supervene on the physical structure of the whole mechanism of which it is a part (cf. Segal 1991: 492-3).
I reject Digital Perseverance. Where Piccinini sees the computational structure of the larger mechanism imposing constraints on the computational structure of the component part, I see computational significance getting lost in composition. First, it is uncontroversial that a complete and correct computational description of a whole device and each of its parts is compatible with the parts performing different, usually simpler, computations than the whole mechanism (cf. Egan 1995: 192). If the part and whole can perform different computations, why can’t the part perform computations on more digits than the whole? Why must the whole mechanism care about everything each of its parts cares about? Without an answer to these questions, there is no reason to deny that parts can compute over more digits than their wholes.
Second, it’s possible to have a computing device, such that one of its computing parts is differentially sensitive to three voltage ranges when the device as a whole is differentially sensitive only to two voltage ranges. If you think that computational structures track functional significance as Piccinini and I do, then you have some reason to treat the larger device as computing over fewer digits than one of its parts.
We can, therefore, tentatively reject Digital Perseverance and combine computational individualism with a mechanistic theory of computational structure. This tentative combination will prove fruitful. In section 2, I will show how easily this combination accounts for miscomputation. In section 3, I will show that computational externalism struggles to account for miscomputation, precisely because of its externalism.
2. Miscomputing Individualistically
We can work our way toward understanding miscomputation by considering some analogies. To misbehave is to behave in a way that violates some relevant norm (e.g., the norms of rationality, morality, or etiquette). To miscommunicate is to communicate in a way that violates some relevant norm (what is communicated is not what was intended to be communicated). To misrepresent is to represent in a way that violates some relevant norm (e.g., truth). You get the picture. To miscompute is to compute in a way that violates some relevant norm, more specifically, a norm for what the system should be computing. It is controversial what gives rise to computational norms, though it is popular to appeal to the system’s function or a designer’s intentions. I focus on the case in which a system miscomputes by computing one function when it should have computed a distinct function. This is the type of miscomputation that I think the externalist has the hardest time making room for. I do think, however, that a system may miscompute in other ways (e.g., computing exactly one function when it should have been computing multiple functions in parallel).
A miscomputation, so understood, is a special kind of malfunction. It is special in two ways. First, while malfunctions require normativity at some level of description, miscomputation requires normativity at the computational level specifically. Where there is no computational behavior that the system should be performing, there is no miscomputation. Second, not all computational failures count as miscomputations. Some such failures are merely mechanical. If the battery dies, the computing system won’t compute anything at all. And if it doesn’t compute anything at all, it’s not miscomputing, just as a diagram doesn’t misrepresent unless it represents something. Or suppose that the battery, fan, and solar panels function properly, but the component that should produce the computationally relevant inputs is completely broken. The device consequently fails to compute anything at all, so it’s not miscomputing. To miscompute is to compute. As I mentioned in the introduction, this point is often neglected, and its neglect explains why no one has adequately accounted for miscomputation thus far.
An adequate account of miscomputation, so understood, involves at least three components: an account of computational behavior (what computation, if any, a system is performing); an account of computational norms (what computation(s) the system should be performing); and an explanation of how these two accounts together make it possible for a system to compute in a way it should not be computing. The latter explanation may be as trivial as pointing out that, in circumstances C, the account of computational behavior entails that the system computes f1 when the account of computational norms says that what should be computed is a distinct function f2. Computational individualists can provide the requisite components without difficulty; externalists cannot.
2.2 Miscomputation Explained
Before presenting the details of my own account, we need to take a step back and appreciate the connection between computational structure and computational behavior. In what follows, I make two simplifying assumptions. First, I assume that a system always manifests its dispositions (and so computational structure) when triggered by the relevant input conditions. This allows us to ignore various complications, such as the possibility of masking, performance error, etc. Witches and protective Styrofoam can mask a vase’s fragility so that the vase won’t break when dropped. We set aside such possibilities. We assume that, when dropped, a vase will manifest its fragility by breaking. A computing system likewise performs the computations that manifest its computational structure. Second, I assume that all computation is deductive or non-probabilistic. When such a computing system receives a computational input and manifests its computational structure, it is guaranteed to produce a specific computational output. These two assumptions are harmless. Any adequate account of miscomputation will allow a computing system to miscompute when it manifests a deductive computational structure. They simplify our discussion by letting us assume that any difference in computational behavior must be explained by a difference in computational structure.
Recall from section 1 that, in the abstract, a computational structure is a complete mapping of computational inputs to computational outputs. A physical system has a certain computational structure iff the structure/mapping counts as a correct description of the system’s actual and counterfactual behavior. The computational structure of a system tells us what the system would do were it to receive a given computational input. Perhaps when given string 0,1 as an input, it outputs 1,1. When you know the actual inputs to the system (and you assume that a deductive computational structure is manifested), the computational structure tells you the actual computational behavior of the system. In other words, computational structure + computational inputs = computational behavior.
Any account of computational behavior will have this same basic structure. The main difference between rival accounts of computational behavior will be their respective accounts of computational structure. When circumstances remain fixed, a difference in computational behavior requires a difference in computational structure.
My account of a system’s computational behavior begins, naturally enough, with my account of computational structures. Recall that, on my view, a system’s computational structure is determined by its internally individuated functional structure (that satisfies the definitive list). To determine what computations are being performed by the system, just plug in the computational inputs (i.e., those states of the system that play the relevant kind of functional role in the behavior of the system).
I deny, however, that narrowly individuated functional structure determines the computational norms for the system. The norms that guide a system’s computational behavior are given, at least in part, by something external to the system itself, e.g., the evolutionary history of the system, the intentions of a designer, the role that system plays in some even larger system, etc. To account for miscomputations, just pick your favorite account of the normativity associated with functional roles. Any of the standard accounts will do, and so will Piccinini’s (2015b, ch 6). Each is sufficient for my purposes, because each individuates norms widely. Since computational behavior is individuated narrowly and computational norms are individuated widely, it’s easy to see that a system can compute a function that it isn’t supposed to compute.
Suppose, for illustration, that the normativity for a manufactured system can be supplied by the intentions of the designer. If so, then miscomputations can arise because of design error (cf. Piccinini 2015: 149). I might intend for system S to compute function f1 but mistakenly construct it so that it computes function f2 instead. To compute f1, perhaps the system needs to be differentially sensitive to three voltage ranges when its current construction makes it differentially sensitive to only two. In such a case, S would be miscomputing.
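The design-error case can be sketched concretely (my own toy illustration, with hypothetical functions; the paper commits to none of these details). The norm fixed by the designer’s intention requires sensitivity to three voltage ranges; the constructed device distinguishes only two:

```python
def intended_f1(v):
    """The norm: the function S was designed to compute.
    It requires differential sensitivity to three voltage ranges."""
    if v < 2.5:
        return 0
    if v <= 5.0:
        return 1
    return 2

def actual_f2(v):
    """The behavior: the function S's actual construction computes.
    The device is differentially sensitive to only two ranges."""
    return 0 if v <= 5.0 else 2

# S is genuinely computing (it manifests actual_f2), but on a 4v input
# it computes something other than what it should: a miscomputation.
v = 4.0
print(intended_f1(v), actual_f2(v))  # 1 0
```

The crucial point is that both functions are well-defined: the system is computing f2, not merely failing to compute f1, which is why this counts as miscomputation rather than mere mechanical failure.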
Or suppose that S is a computing component of some larger biological system S*. In order for S to make its essential contribution to the biological fitness of S* (or whatever determines S’s teleological function), it needs to compute function f3. But S is damaged (S* has a brain lesion, perhaps), so it computes f4 instead. S is miscomputing. f4 is the computation that actually describes the behavior of S, when it should have behaved so as to be correctly described by f3. Again, given my account of computational structure and any standard account of computational norms, S would be miscomputing.
2.3. Two Senses of Functional Structure
To better understand how my account of miscomputation works, we need to disambiguate two senses of function and functional structure. Functions are special kinds of dispositions. Roughly, a component has a disposition to X in circumstances C iff it tends to X in C. Hearts are disposed to pump blood when they receive the relevant sort of electrical charges and are connected to blood vessels in the relevant sort of way and so on. Hearts are also disposed to make noise in those same circumstances. Yet not all dispositions have the same sort of explanatory significance within a larger system. A component’s disposition is a function of that component iff the disposition is needed, at a certain level of abstraction, to account for the dispositions and behavior of the overall system (cf. Cummins 1983: 28-9). At the biological level of abstraction, we need to appeal to the heart’s pumping blood—but not its making noise—in order to account for the dispositions and behavior of the circulatory system. Hence, all functions are dispositions, but not all dispositions are functions. To be a function is to be a disposition that plays an explanatory role in a larger system.
A functional structure of a system represents, at a certain level of abstraction, how the dispositions of each component underwrite the dispositions (powers, capacities) of the overall system. When the dispositions of the various components are manifested, they work together to explain the behavior of the overall system. The functional structure of the circulatory system would not only represent the heart’s contribution, but also the contributions of blood and blood vessels, to the dispositions and operation of the circulatory system. Functional structures track how the functions of the components work together to account for the dispositions and behavior of the overall system.
Functional structures, so understood, are purely descriptive. They describe, at a certain level of abstraction, how the dispositions of the components actually work together to explain the system’s dispositions and behaviors. There is no further claim that this is how the various components should work (together). Perhaps it is and perhaps it isn’t. When a system functions improperly—when actual and proper function come apart—the purely descriptive notion tracks actual function.
There is also a normatively loaded sense of function and functional structure. To say that the function of the heart is to pump blood, in this sense, is to say that the heart is supposed to pump blood. This sense of function doesn’t track how things actually work but how they should work. The normatively loaded sense of functional structure represents, at a certain level of abstraction, the dispositions the components should have and how those dispositions should work together to underwrite the dispositions the system should have.
The purely descriptive and normatively loaded senses come together in properly functioning systems. If a system is functioning properly, a system’s purely descriptive and normatively loaded functional structures are identical.
In contrast, the purely descriptive and normatively loaded senses come apart in malfunctioning systems. If a system malfunctions, one functional structure will describe the actual organization and operation of the system and a distinct structure will describe how the system should be organized and how it should operate. Malfunction is possible only when actual (purely descriptive) functioning deviates from proper (normatively loaded) functioning.
The purely descriptive/normatively loaded distinction is not the narrow/wide distinction. The latter distinction concerns whether, at a certain level of abstraction, a system’s environment or its role in a larger system can affect the individuation of its current structure and behavior. The former concerns whether, at a certain level of abstraction, a certain structure and behavior are (pure) descriptions of or norms for a given system. It is ordinarily assumed that norms must be individuated widely, but, in principle, purely descriptive functional structure could be wide or narrow. My view of computational structures is that the purely descriptive structure is narrow and the normatively loaded is wide. In other words, I think computational behavior is narrow and computational norms are wide. At other levels of abstraction, both behavior and norms may be wide. One externally individuated functional structure might provide the norms for our mental structure and behaviors (e.g., the norms of rationality), and—given our assumption of content externalism—a distinct widely individuated functional structure would specify our actual mental structures and behavior.
I identified, in §1, one difference between my mechanistic theory and that of Piccinini: my version appeals to narrow functional structure and his appeals to wide functional structure. A second difference is that my version appeals to purely descriptive structures and his to normatively loaded ones (cf., e.g., Piccinini 2015b: 113-4, 151). I claim, then, that computational structures are determined by a specific type of narrowly individuated, purely descriptive functional structure. The specific type is the type that satisfies the definitive list, the list of properties that distinguish computational functional structure from other kinds of functional structure.
Narrowly individuated functional structure is, of course, not a good candidate to account for normatively loaded functional structure, or more specifically, the computational norms of a computing system. That is why, when I discuss the norms for a device—computational or otherwise—I follow just about everyone else in asserting that the norms are individuated widely. Miscomputation is made possible on my account, precisely because there is a gap between the narrow individuation of (purely descriptive) computational structure and the wide individuation of (normatively loaded) computational norms. A system miscomputes when its behavior manifests a narrow computational structure that the widely individuated norms say that it should not have.
My account of computational structure does impose one constraint on computational norms that I should mention. Recall that, in properly functioning systems, a system’s purely descriptive functional structure is identical to its normatively loaded functional structure. In properly functioning computing systems, the system’s actual computational structure will be identical to the computational structure that it should have. Since (actual) computational structures are narrowly individuated, the computational structure that a system should have must be specifiable in narrowly individuated terms.
Contrast the following two norms for a given system S:
Norm specified in wide terms: when the input voltage is >5v, output the voltage that will allow the larger system to operate as an and-gate.
Norm specified in narrow terms: when the input voltage is >5v, output >5v.
The first norm is specified in wide terms, because it references the larger system of which S is actually a component. The second norm is specified in narrow terms, because it mentions behavior that can be individuated internally to the system. There is no reference, explicitly or implicitly, to things beyond the system itself. Essentially, the internally individuated structure a system should have is whatever internally individuated structure properly functioning versions of that system do have. I say computational norms are widely individuated not because they are specified in terms that reference things beyond the internally individuated states and structure of the system; rather, computational norms are widely individuated because what makes something the proper internal structure for a system is determined by things beyond the internally individuated states and structure of the system (e.g., what it takes for the system to survive in its environment, the goal of some designer, etc.).
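The contrast can be illustrated with a small sketch (my own, with hypothetical devices; the paper gives no code). The narrowly specified norm can be stated and tested without any reference to a larger system, even though what *makes* it the right norm is determined widely:

```python
def satisfies_narrow_norm(device, test_voltages):
    """Check the narrowly specified norm from the text:
    when the input voltage is >5v, output >5v.
    No reference to any larger system is needed to state or test it."""
    return all(device(v) > 5.0 for v in test_voltages if v > 5.0)

def proper(v):
    """A device structured as properly functioning versions are."""
    return 9.0 if v > 5.0 else 1.0

def damaged(v):
    """A damaged device that outputs 1v no matter what."""
    return 1.0

voltages = [1.0, 4.0, 6.0, 9.0]
print(satisfies_narrow_norm(proper, voltages))   # True
print(satisfies_narrow_norm(damaged, voltages))  # False
```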
3. Miscomputation and Computational Externalism
You have just seen how easily computational individualists can accommodate miscomputation. Since they can endorse internally individuated computational behavior while holding onto externally individuated computational norms, it is no mystery how a system can compute a function that it shouldn’t. In this section, we see that computational externalism—the claim that computational structures are individuated widely—makes it much more difficult to explain how miscomputation is possible.
3.1. Some Options for the Externalist
Recall that, to account for miscomputation, one must do more than show that computing systems can malfunction. If a computing system is so broken that it fails to compute at all, it is malfunctioning but not miscomputing. An account of miscomputation must have at least three parts: (i) an account of computational behavior; (ii) an account of computational norms; and (iii) an account of how it’s possible for a system to compute something it isn’t supposed to be computing. (iii) imposes a constraint on one’s account of computational behavior, namely that it must be possible for malfunctioning systems to compute. Otherwise, it will never be the case that a system is computing in a way that it shouldn’t.
Piccinini provides, as far as I’m aware, the only extended philosophical discussion of miscomputation. When Piccinini (2015b: 148-50) “explains” how his mechanistic theory of computation accounts for miscomputation, he fails to explicitly address any of (i)-(iii). He does describe several different kinds of miscomputation; however, he never even attempts to show that his mechanistic theory makes those kinds of miscomputation possible. The problem, as I mentioned in the introduction, is that Piccinini didn’t properly distinguish malfunction and miscomputation. He shows, at most, that a computational system can malfunction. Yet he fails to account for miscomputation, a very specific type of malfunction.
In chapter 6, Piccinini provides an account of wide functional structure that is teleological in nature: to be a function is to make a stable contribution to attaining certain goals. Roughly, for organisms, the goal would be survival and, for artifacts, the goal would be some goal of a designer. This kind of functional structure is wide because these goals are determined, in part, by something beyond the system itself. Let us say that a system’s teleologically individuated computational structure is the computational structure the system needs in order to contribute, in a stable way, to the relevant goals of the system or its designer. It is natural to suppose that the teleologically individuated computational structure plays an important role, for Piccinini, in accounting for miscomputations. Here are some options.
- Option A: A system’s computational structure and computational norms are both necessarily identical to the system’s teleologically individuated computational structure.
- Option B: A system’s computational structure—but not its computational norms—is necessarily identical to the system’s teleologically individuated computational structure.
- Option C: A system’s computational norms—but not its computational structure—are necessarily identical to its teleologically individuated computational structure.
Option A makes miscomputation impossible. If both computational structure and computational norms are identical to the system’s teleologically individuated computational structure, then a system always has the computational structure it should have, and so it always computes whatever it is supposed to compute. Yet a system doesn’t malfunction computationally unless it has a computational structure it shouldn’t, and where there’s no malfunction, there’s no miscomputation. I tentatively think that Piccinini (2015b) committed himself to option A. Even if I’m right about this, my job is not done. To show that miscomputation poses a serious problem for Piccinini (and the externalist more generally), we need to show that Piccinini can’t easily account for miscomputation by choosing one of the other options.
Option B also seems hopeless. The problem for B is also a problem for A: both A and B claim that teleology determines computational structure. Given the same inputs, A and B claim that there is no change in computational structure without a change in teleological function. The result is disaster. Suppose that S1, S2, and S3 are all computing systems of kind K, and their teleological function requires them to perform a certain computation. In particular, they are required to output O1 (say, ≤5v) when they receive input I1. While S1 fulfills its teleological function, S2 and S3 do not (they’re broken). When given I1, S1 outputs O1 (≤5 volts), S2 outputs a distinct output O2 (9 volts), and S3 is so broken that it doesn’t output anything at all. S1 and S2 seem to be computing distinct functions, and S3 doesn’t seem to be computing at all. It seems absurd to claim that S1, S2, and S3 are each computing the same function. Yet this absurdity is just what A and B entail. Since S1, S2, and S3 share the same teleological function, they have the same computational structure. As explained in 2.2 above, the same computational structure plus the same computational inputs guarantees the same computational behavior.
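Toy versions of the three systems make the absurdity vivid (my own sketch; the specific values merely echo the example above). All three share the teleological function of outputting O1 (≤5v) on input I1, yet their actual behaviors are plainly distinct:

```python
def s1(v):
    """Fulfills its teleological function: outputs O1 (<=5v)."""
    return 4.0

def s2(v):
    """Broken: outputs O2 (9v) instead."""
    return 9.0

def s3(v):
    """So broken that it outputs nothing at all."""
    return None

# Options A and B individuate computational structure by teleology alone,
# so they must say all three systems perform the same computation,
# despite these distinct actual behaviors on the very same input I1.
I1 = 1.0
print(s1(I1), s2(I1), s3(I1))  # 4.0 9.0 None
```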
The moral is that the correct account of computational structure must be at least partly independent of the system’s teleological function (that satisfies the definitive list). It must be possible, in other words, for damage to a component to change the computational structure of the component without changing its teleological function. In retrospect, this shouldn’t surprise us. Computational structure and behavior track how the system is actually structured and actually operating, not how it should be structured and should be operating. Any account of computational behavior that relies on teleological norms is taking a normatively loaded functional structure and trying to make it play a purely descriptive role. We shouldn’t be surprised that there’s a poor fit.
I should stress that teleological theories of content have nothing to fear from this objection, precisely because they use teleology to play a normatively loaded role and only to play such a role. Crudely put, such theories hold that a mental state M represents C when C is what should cause M, where the relevant teleology determines what should cause M (cf. Neander 2012, sec 3). This normativity helps account for misrepresentation. A cleverly disguised mule might be the actual cause of my belief that there’s a zebra. This is a case of misrepresentation (a false belief), because what caused my belief (the mule) is not what should cause it (a zebra). We should learn from this account of misrepresentation. Just as we can vary the actual cause of M without varying M’s teleology, we can vary the actual computational structure of a system without varying its teleology.
My individualist, mechanistic theory avoids my objection to A and B. The correct account of computational structure needs to let computational structures vary with changes to the actual composition of the system even when those changes don’t yield a difference in teleology. By tying computational structures to narrowly individuated functional structures (that satisfy the definitive list), we accomplish precisely that. We allow damage to a system to affect its computational behavior without affecting its teleological function. Just as some hearts beat in ways that fail to fulfill their teleological functions, some computing systems compute in ways that fail to fulfill their teleological functions.
Option C is the only externalist option that has a hope of accounting for miscomputation. It uses teleology for what it is well-suited to do and only for what it is well-suited to do. On this option, teleology provides an account of computational norms, but not an account of computational behavior. This is as it should be. The teleology of computing systems provides a much better account of their biological/computational norms than of their actual behavior. As it stands, though, C fails to provide a complete account of miscomputation. It must be supplemented by an account of a system’s (actual) computational structure and behavior. There are at least two ways of supplementing C.
The first begins by claiming that the computational structure of a malfunctioning computing system is necessarily identical to its narrowly individuated functional structure. This first way is able to explain miscomputation in roughly the same way I did: it is the gap between wide norms and narrow computational structure that makes miscomputation possible. But does the resulting view remain an externalist view? Or does it require conversion to individualism?
It depends. So far the first way specifies only that the computational structure of malfunctioning systems is identical to their narrow functional structure. If the computational structure of a properly functioning system is also identical to its narrow functional structure, then the computational structure of every computing system is narrowly individuated. The resulting view just is the individualist account of computational individuation I developed earlier in the paper. The first way of supplementing C remains a genuine externalist position only if it claims that the computational structure of properly functioning systems is wide. Such a position seems ad hoc. It holds that the computational structure of malfunctioning systems is narrow and that of properly functioning systems is wide. But why would the scope of the individuation vary according to whether a system functions properly or not? In short, the problem with the first way of supplementing C is that it is either ad hoc or not a genuine alternative to my individualist account of miscomputation.
The second way of supplementing C is, perhaps, more promising. Recall that C holds that computational norms are determined by a system’s teleologically individuated computational structure. The second way of supplementing C claims that a system’s computational structure—whether properly functioning or not—is determined by some other sort of wide structure.
If the content of computational states were used to individuate computational structure, then this second way would have some promise. The kinds of causal connections that partly individuate content seem better suited to determine actual computational structure and behavior than do teleological functions. After all, given content externalism, these causal connections are already employed to partly determine actual behavior at the mental level of abstraction. Perhaps these causal connections also partly determine the behavior of the system at the computational level of abstraction. In the good ol’ days, of course, the rallying cry of computationalists was No computation without representation! (Fodor 1975: 34; Pylyshyn 1984: 62). Rallies are exciting. Ra ra and all that. Yet I think it’s better to leave representation out of the individuation of computational structure. The next sub-section explains why.
3.2. Content to the Rescue?
Prima facie, my individualist story about computational structure will be simpler than any externalist account that employs content. Theories of (wide) content generally individuate content by appealing to both causal relations with the environment and the functional structure of the representing system. That they appeal to the latter can be easy to forget, because the debate about content externalism already presupposes that we are talking about thoughts and representations. Tigers cause my hair to stand on end, and they cause my belief that the tigers are coming to get me. The standing of my hair might track the presence of tigers, but it does not represent them. On the other hand, my belief does represent tigers. What explains this representational difference? The answer is that only beliefs play the right kind of functional role within the system. It’s controversial, of course, what sort of functional role is necessary for representation, but mere tracking is not enough. (For an excellent discussion of representation in the context of perception, see Orlandi 2014, ch 3.)
Semantic accounts of computation need functional structure in a second way. The appeal to content is best suited to individuate computational states. Individuating computational states is necessary but not sufficient for individuating computational transitions, or computations. If computations C1 and C2 consist in the same input and output states, it does not follow that they are identical. The computational identity of their transitions is “bound together” with the complete mapping (or rule) that correctly describes the behavior of the system. C1 and C2 will be distinct computations if they manifest distinct computational structures. Analogously, suppose that systems S1 and S2 receive 2,2 as inputs and then output 4. It doesn’t follow that they are performing the same mathematical function, because S1 might be performing addition and S2 might be performing multiplication. It doesn’t follow because the complete mapping from inputs to outputs that describes S1’s (actual and counterfactual) behavior might be distinct from the complete mapping that describes S2’s (actual and counterfactual) behavior.
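The point that a single input-output pair underdetermines which function is being computed can be put in a few lines of Python. This is my own sketch of the paper’s addition/multiplication example, not something the paper provides:

```python
def addition(x, y):
    return x + y

def multiplication(x, y):
    return x * y

# The two mappings agree on the inputs 2, 2...
assert addition(2, 2) == multiplication(2, 2) == 4

# ...but they are distinct functions, as other inputs reveal.
assert addition(3, 2) == 5
assert multiplication(3, 2) == 6
```

Only the complete mapping over actual and counterfactual inputs distinguishes the two, which is why individuating computational states is not yet individuating computations.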
If you want computational individuation to be an objective, non-vacuous affair (if you don’t, just play along), then here’s a question: what determines which mapping/rule is being followed in a given computation? The only answers that I’m aware of ultimately appeal to some form of functional structure. This includes accounts, such as Chalmers’ (2011) and Piccinini’s (2015b), that emphasize the functional or dispositional features of their views. Yet even classic proponents of semantic accounts, such as Pylyshyn (1984: 66-9) and Fodor (e.g., 2008: 77), demand a certain kind of causal structure for a transition to even count as a computation (they endorse the “nearly ubiquitous view” discussed below). When individuating computations, the appeal to functional structure appears inevitable.
If we must appeal to functional structure to individuate computations, a theory is prima facie less simple if it appeals to content as well. In order to justify the appeal to content, then, the externalist has an extra burden that the individualist does not have: she must show that the appeal to content is worth the extra complexity. I don’t think this demand can be met. There is just nothing for content to do at the computational level of description. I can’t provide a complete defense of this claim here, but Chalmers (2011: 334-5) and especially Piccinini (2008; 2015b, ch 3, 7) have made many of the key points. Here I will just briefly respond to one recent argument that content is relevant to a system’s computational structure.
The “nearly ubiquitous view” of (digital) computational description is that computational transitions consist solely in the “formal” or syntactic manipulation of strings of digits (Rescorla 2014: 175). On this view, at the computational level of abstraction, the only properties of input strings that are causally relevant to which output string is produced are their formal properties—such as, which digits appear in the string, how many times, and in what order. Since the formal properties of strings can supervene on the narrowly individuated functional structure of a physical system, this formal conception of computation is compatible with the version of computational individualism that I’ve defended in this paper. One strategy for defending externalism is to challenge this formal conception and to argue that content is required to fully capture, at the computational level of description, why the system transitioned from one string to another. Rescorla attempts to make good on this strategy.
Rescorla (2014: 175) announces: “I reject the nearly ubiquitous view that digital computation consists [solely] in ‘formal’ syntactic manipulation.” I do think that Rescorla establishes, quite forcefully, something interesting and worthwhile. What he shows is that, given Woodward’s interventionist view of causal relevance, the representational properties of strings are causally relevant to which string a system outputs. That’s interesting, and it helps to buttress the computational theory of mind against epiphenomenalism; however, it’s not tantamount to taking down the formal conception of computational description. To do that, Rescorla needs to establish the additional thesis that the causal relevance of representational properties occurs within the computational level.
As Rescorla recognizes (e.g., 185), just because one factor causally influences another, it doesn’t follow that the influence occurs within a single level of description. For example, given interventionism, Kim’s (2005, ch 2) famous causal exclusion argument fails. That is, mental content can causally influence what’s happening at the neural level, even though the mental level is thought to be realized by (and so is distinct from) the neural level (cf. Rescorla 2014, secs 6.1 and 10). This example of cross-level influence is a key analogy for Rescorla. When responding to worries that what’s happening at the formal level excludes content from causal relevance, Rescorla contends that “We should no more conclude that the syntactic properties do all the ‘causal work’ than we should conclude that the neural properties do all the ‘causal work’” (2014: 201). If anything, this analogy reinforces the orthodox view that content is not involved at the computational level. Rescorla shows us something important, namely that content is causally relevant to computation. Yet he doesn’t show us that content is relevant to describing computing systems within the computational level.
We considered various ways for the computational externalist to account for miscomputation. The most promising was the second way of supplementing C. On this way, computational norms are specified by one wide structure and computational structures are individuated by a different wide structure. For this strategy to succeed, the externalist must find a kind of wide individuation that is well suited to account for (actual) computational structure. The various kinds of teleology that are on the market seem poorly suited to account for computational structure (though, some type of teleology very well may account for computational norms). Content is better suited to (help) individuate the actual behavior of a system. Nonetheless, the appeal to content adds complexity but not benefit. And even if you can find something for content to do at the computational level of description, you still have the additional work of explaining how content helps explain miscomputation.
I have not shown that externalist accounts of miscomputation are hopeless; however, I have shown that they face challenges that my individualist account does not face. For the time being, then, we have a reason to prefer individualism to externalism.
Computational externalists sometimes complain that individualism lacks the resources to individuate computational structure. I showed they were mistaken by allowing a certain kind of narrowly individuated functional structure to determine computational structure. I relied on this account of computational structure to provide the first adequate account of miscomputation. I then showed that externalists have difficulty accounting for miscomputation precisely because they lack an adequate account of computational structure. Computational externalism is the dominant position on computational individuation. This paper suggests that it shouldn’t be.
Allen, Collin. 2003. “Teleological Notions in Biology”. Stanford Encyclopedia of Philosophy. Stable URL: http://plato.stanford.edu/entries/teleology-biology/.
Bontly, Thomas. 1998. “Individualism and the Nature of Syntactic States.” British Journal for Philosophy of Science 49: 557-74.
Burge, Tyler. 1986. “Individualism and Psychology.” The Philosophical Review 95: 3-45.
Chalmers, David. 2011. “A Computational Foundation for the Study of Cognition.” Journal of Cognitive Science 12: 323-57.
Cummins, Robert. 1989. Meaning and Mental Representation. Cambridge (MA): MIT Press.
_____. 1983. The Nature of Psychological Explanation. Cambridge (MA): MIT Press.
Dewhurst, Joe. 2014. “Mechanistic Miscomputation: a Reply to Fresco and Primiero.” Philosophy and Technology 27: 495-8.
Egan, Frances. 1995. “Computation and Content.” The Philosophical Review 104: 181-203.
_____. 1992. “Individualism, Computation, and Perceptual Content.” Mind 101: 443-59.
Fodor, Jerry. 2008. LOT 2: The Language of Thought Revisited. Oxford: Oxford University Press.
_____. 1975. The Language of Thought. Cambridge (MA): Harvard University Press.
Fresco, Nir and Giuseppe Primiero. 2013. “Miscomputation.” Philosophy and Technology 26: 253-72.
Kim, Jaegwon. 2005. Physicalism, or Something Near Enough. Princeton: Princeton University Press.
Neander, Karen. 2012. “Teleological Accounts of Mental Content.” Stanford Encyclopedia of Philosophy.
Orlandi, Nico. 2014. The Innocent Eye: Why Vision is Not a Cognitive Process. New York: Oxford University Press.
Peacocke, Christopher. 1999. “Computation as Involving Content: A Response to Egan.” Mind and Language 14: 195-202.
Piccinini, Gualtiero. 2015b. Physical Computation: A Mechanist Account. Oxford: Oxford University Press.
_____. 2015a. “Computation in Physical Systems.” Stanford Encyclopedia of Philosophy. Stable URL: http://plato.stanford.edu/entries/computation-physicalsystems/.
_____. 2008. “Computation without Representation.” Philosophical Studies 137: 205-41.
_____. 2007. “Computing Mechanisms.” Philosophy of Science 74: 501-26.
Pylyshyn, Zenon W. 1984. Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge (MA): The MIT Press.
Rescorla, Michael. 2014. “The Causal Relevance of Content to Computation.” Philosophy and Phenomenological Research 88: 173-208.
Segal, Gabriel. 1991. “Defence of a Reasonable Individualism.” Mind 100: 485-93.
Shagrir, Oron. 2001. “Content, Computation, and Externalism.” Mind 110: 369-400.
Shapiro, Lawrence A. 1997. “A Clearer Vision.” Philosophy of Science 64: 131-53.
 This apparent fact is bemoaned by Dewhurst (2014), Fresco and Primiero (2013: 254), and Piccinini (2015b: 14, 48).
 It won’t do to insist that the term miscomputation should be used to mean malfunction of a computing system. The labels we use matter only so much. As long as what I call miscomputation is possible, an adequate account of computation needs to explain how it is possible.
 Computational individualists include Cummins (1989: 81) and Egan (1992; 1995).
 Computational externalists include Burge (1986: 28-9), Bontly (1998: 569-70), Peacocke (1999), Piccinini (2015b: 42-3), and Shapiro (1997: 141). Even Segal (1991: 492-3) is an externalist insofar as he allows the computational structure of a system to depend on its role in a larger system.
 Egan (1992; 1995) defends the conjunction of computational individualism and content externalism.
 I clarify the relation between dispositions and functions in §2.3.
 For additional constraints on digital computation, see Piccinini (2007: 508-514, 2015b: 125-34).
 Chalmers’ organizational independence is similar to Piccinini’s medium-independence. While Chalmers requires this sort of independence for many mental properties, he is committed to pancomputationalism because he doesn’t require it of computational ones (contrast Chalmers’ discussion of digestion on pp. 332 and 337-8).
 To determine these groupings, Shagrir (2001: 382-3) argues that we must appeal to semantically individuated tasks. Given our assumption of content externalism, Shagrir’s view entails computational externalism. But the rest of the paragraph shows that we don’t need to appeal to semantically individuated tasks after all.
 Parts tend to perform simpler computations, because systems are generally constructed so that their computing operations are somewhat efficient. It’s possible for the parts to perform more complex computations than the whole.
 You’ll save yourself some time if you take my word for it. For the most scrupulous readers, I describe such a device. Let S3 be the composition of S1 and S2. Suppose S1 is differentially sensitive to three voltage ranges: <2.5v, 2.5v-5v, and >5v. Let S1’s outputs be S2’s inputs, where S2 is only bi-stable, and thus only differentially sensitive to two voltage ranges, 0-5v and >5v. Whenever S2’s input is 0-5v, it outputs 0-5v. Whenever its input is >5v, it outputs >5v. In such a case, S3, which is composed solely of S1 and S2, will only be differentially sensitive to two ranges, 0-5v and >5v. (Assuming that Digital Perseverance is false, S3 is an and-gate. As typical with mechanistic explanation, we can explain how S3 has such a computational capacity by breaking S3 into its component parts. It has the and-gate capacity because it has a component, S1, that performs an “averaging task of sorts” (cf. Piccinini 2008: 228) and a component, S2, that performs the identity function on S1’s outputs.)
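 For readers who would like to run the device, here is a sketch of it in Python. It is a hedged reconstruction of mine, not the paper’s own specification; the sample voltages standing in for digits are assumptions:

```python
LOW, HIGH = 0.0, 9.0  # assumed sample voltages standing in for digits 0 and 1

def s1(v1, v2):
    # S1 performs an "averaging task of sorts" over its two input voltages;
    # the result lands in one of the three ranges <2.5v, 2.5v-5v, or >5v.
    return (v1 + v2) / 2.0

def s2(v):
    # S2 is bi-stable: it distinguishes only 0-5v from >5v and outputs a
    # voltage in the same range it received (the identity function on ranges).
    return HIGH if v > 5.0 else LOW

def s3(v1, v2):
    # S3 composes S1 and S2; it outputs >5v only when both inputs exceed 5v,
    # so (assuming Digital Perseverance is false) it behaves as an and-gate.
    return s2(s1(v1, v2))
```

Running s3 on all four combinations of LOW and HIGH inputs reproduces the and-gate truth table: only two HIGH inputs yield a HIGH output.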
 Rival accounts of computation can disagree about what counts as the computational input to the system, i.e., what counts as a digit, but these differences will reduce to disagreements over the computational structure of the system. This is what happened in §1 when I very briefly discussed Shagrir’s challenge for individuating digits.
 See Allen (2003) for a survey of the standard accounts for biological organisms.
 Bontly (1998: 569-70) and Piccinini (2015b: 43) claim that teleology determines computational structure, so they opt for either A or B.
 Why think that S3 doesn’t compute at all? Computation is a certain kind of transition between inputs and outputs; no outputs, no computation. S3 makes as if to compute but fails to compute. Consider an analogy. Suppose that, as I’m winding up to throw the ball to my kid, I’m distracted by the alien warships that have descended into view. While my arm moves forward in a lazy throwing motion, I never let go of the ball. I didn’t throw the ball. I made as if to throw. Just as throwing the ball requires the ball to leave my hand, computing requires the system to output a (complete) string of digits.
 You can replace teleology with your preferred account of computational norms. It won’t matter.
 Those who appeal to functional explanations often insist that properly functioning systems have a kind of priority over malfunctioning systems (e.g., Piccinini 2015b: 109-10). Can this priority provide a non-ad hoc account of miscomputation? Not by itself. First, the resulting view must explain how it is possible for a damaged component to compute a function that it is not supposed to compute. By itself, the priority of proper function doesn’t tell us how this is possible. Second, the priority of teleologically or normatively loaded explanations does not guarantee that a single kind of explanation, computational explanation, is wide in properly functioning systems and narrow in malfunctioning systems. There might be two different kinds of explanation, and computational explanation might be the one that is narrow and lacks priority.
 My response to Shagrir in §1 is also relevant. Between my response to Shagrir above and my response to Rescorla below, this paper responds to two very different arguments that content is needed to account for the computational structure of the system.
 The context makes it clear that Rescorla isn’t denying that formal manipulation is relevant to computational transitions; what he’s denying is that formal manipulation is all that’s relevant.
 This paper is in memory of Jonathan McKeown-Green (1963-2015), who helped me think through the inchoate ideas that ultimately led to this paper. Matt Haug, Gualtiero Piccinini, Kevin Sharpe, and an anonymous referee provided helpful comments on earlier drafts. Those earlier drafts were written thanks to a William and Mary Summer Research Grant. I owe these people and William and Mary my gratitude.