Miscomputing Individualistically: It’s the Only Way to Do It

Chris Tucker (College of William and Mary)


Introduction

Just as a theory of representation is deficient if it can’t explain how misrepresentation is possible, a theory of computation is deficient if it can’t explain how miscomputation is possible.  You might expect, then, that philosophers of computation have well-worked-out theories of miscomputation.  But you’d be wrong.  They have generally ignored miscomputation.[1]  Worse still, when it hasn’t been ignored, it’s been conflated with malfunction of a computing mechanism.  Piccinini claims that “if the [computing] mechanism malfunctions, a miscomputation occurs” (Piccinini 2015a, sec 2.5; 2015b: 122).  Fresco and Primiero make a similar mistake: “When a [computing] system fails to accomplish the purpose for which it was designed, a miscomputation can be identified” (2013: 257).

Miscomputation is a special kind of malfunction.  If the battery dies, a system may fail to compute what it is supposed to compute.  But it’s not miscomputing, because it’s not computing at all.  Just as something doesn’t misrepresent unless it represents, something doesn’t miscompute unless it computes.  To miscompute is to compute in a way that violates a computational norm.[2]  Consequently, an adequate account of miscomputation requires an account of what the system is computing when the system is violating the relevant computational norms.  I argue that providing this account is easy for the computational individualist, but hard for the computational externalist.

Computational individualism is the claim that the computations a system performs are individuated narrowly.  In other words, if we hold the laws of nature fixed, then a system’s computational structure supervenes on its physical structure.  This view entails that neither a system’s environment nor its role in some larger system affects its computational structure.[3]  Its denial is called computational externalism.  Computational externalism is the dominant position on computational individuation,[4] but do not confuse it with content externalism.  I assume, for the sake of the paper, that content externalism is true.  It does not follow that computational externalism is true.  Causal connections to water can affect the content of our thoughts, even if they can’t affect the computational structure that underwrites those thoughts.[5]

In §1, I briefly present Piccinini’s mechanistic theory of computation and argue that it can be combined with individualism.  In §2, I show that this individualist, mechanistic theory easily accounts for miscomputation.  In §3, I argue that externalism has difficulty accounting for miscomputation.  

 

1. Computational Structures and Functional Structures

I focus on digital computation.  Oversimplifying, a digital computing system is an input-output device in which the relevant inputs and outputs are strings of digits.  A digit is a state of the system.  In simple systems, digits are often just electrical charges, where charges of different voltages can count as different digits.  A string of digits is an ordered list of digits.  For example, a string of digits in a system might be a series of electrical charges.  A system computes when the formal properties of the input string “lead” the system to produce a certain output string, e.g., a different series of electrical charges.  In the abstract, a computational structure is a complete mapping from the possible input strings to the possible output strings.  A system has or implements a certain computational structure when the structure, or mapping, is a correct description of the system’s (actual and counterfactual) behavior.  That much is relatively uncontroversial.  In the rest of this section, I present a controversial theory of (when a system implements a) computational structure and argue that it can be combined with computational individualism.
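
To fix ideas, here is a minimal sketch (my own toy illustration, in Python; the digit labels and the particular mapping are made up) of a computational structure as a complete input-output mapping, and of implementation as that mapping’s correctness as a description of a system’s behavior:

```python
# A computational structure, in the abstract: a complete mapping from
# possible input strings of digits to possible output strings.
xor_structure = {
    ("0", "0"): ("0",),
    ("0", "1"): ("1",),
    ("1", "0"): ("1",),
    ("1", "1"): ("0",),
}

def implements(structure, behavior_table):
    """A system implements a structure when the mapping correctly
    describes what the system does (and would do) for every possible
    input string.  Here a table of the system's actual-and-counterfactual
    behavior stands in for the physical system itself."""
    return (set(behavior_table) == set(structure)
            and all(structure[i] == o for i, o in behavior_table.items()))
```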

Piccinini (2007; 2008; 2015b) has defended a mechanistic theory of computation, which holds that a system’s computational structure is determined by a specific kind of dispositional or functional structure.[6]  On this view, to say that a system implements a certain computational structure is not merely to describe the system: it is to explain, at a certain level of abstraction, why the system is doing what it’s doing (or would do were it given a certain input).  The relevant type of explanation is a special form of dispositional explanation.  We explain what the salt is doing while submerged, in part, by pointing out that it is soluble, that it has a certain dispositional structure.  When we explain a system’s behavior by appealing to a dispositional structure that counts as computational, we provide a computational explanation of the system’s behavior.  Let the definitive list be the complete list specifying which properties are necessary and sufficient for a dispositional structure to be computational and which properties further individuate a system’s computational structure.  The definitive list, then, is the correct and complete account of which functional structures count as computational and which differences between functional structures make a computational difference.

I’ll pretend that Piccinini has given us the definitive list, but even he admits that he hasn’t done so (2015b: 120).  He has, however, made substantial progress.  One property on the definitive list is medium-independence.[7]  To count as a computing system, a system’s behavior must be explicable at a level of abstraction that makes no reference to the media in which the behavior is carried out.  Medium-independence is stronger than multiple realizability.  There is more than one way to realize removing corks from wine bottles, but this behavior is necessarily performed on a certain kind of media, namely wine bottles and corks.  This behavior is multiply realizable but not medium-independent (Piccinini 2015b: 122-3).  Medium-independence imposes a significant constraint on which systems compute.  Most mechanisms fail to meet this condition (Piccinini 2015b: 146).  The behavior of the digestive system is, for example, “quintessentially medium-dependent”: its processes are defined in terms of “specific chemical changes to specific families of molecules” (147; cf. Chalmers 2011: 337-8).  By requiring medium-independence, Piccinini’s mechanistic theory avoids pancomputationalism (the idea that everything computes).  This is a substantial achievement.  Some otherwise plausible accounts, such as Chalmers’s (2011), fail to do so.[8]

The functional structure that meets the conditions of the definitive list does more than identify which systems compute.  It also reveals the computational structure of computing systems.  Shagrir (2001) observes that voltage values could be grouped in many ways.  We could treat any voltage up to five volts as a single digit.  Alternatively, we could treat any voltage less than 2.5v as a single digit and any voltage from 2.5-5v as a distinct digit.  An adequate account of computational structure must determine which groupings reflect the computational structure of the system.[9]  The mechanistic theory claims that the correct grouping is determined by “the components of the mechanism, their functional properties, and their organization” (Piccinini 2015b: 132).  In other words, functional significance determines computational significance.  If a system’s outputs aren’t differentially sensitive to differences among input voltages of ≤5v, then the computational structure of the system treats those voltages as a single digit rather than two.  In sum, to attribute a certain computational structure to a system is to attribute a certain functional structure to the system that explains its actual and counterfactual behavior.
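
A toy contrast may help (an illustrative sketch; the device and its voltage thresholds are hypothetical).  Of the two candidate groupings just mentioned, only the first tracks distinctions to which the device’s outputs are differentially sensitive:

```python
def device_output(v):
    # Hypothetical device: its output differs only across the 5v
    # boundary, so distinctions below 5v make no functional difference.
    return 9.0 if v > 5.0 else 1.0

# Grouping 1: any voltage up to 5v counts as a single digit.
group_1 = lambda v: "d0" if v <= 5.0 else "d1"

# Grouping 2: voltages <2.5v and 2.5-5v count as distinct digits.
group_2 = lambda v: "d0" if v < 2.5 else ("d1" if v <= 5.0 else "d2")

# Grouping 2 distinguishes 1v from 4v, but the device's behavior doesn't:
assert group_2(1.0) != group_2(4.0)
assert device_output(1.0) == device_output(4.0)
# On the mechanistic view, grouping 1 reflects the device's computational
# structure: functional significance fixes computational significance.
```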

This sketchy presentation of the mechanistic theory is enough for my purposes.  You can always consult Piccinini’s work for the details.  I endorse Piccinini’s theory save two differences.  I discuss the second in §2.3.  The first difference concerns how the relevant sort of functional structure is individuated.  He claims that it is individuated widely, and I claim it is individuated narrowly.  He’s a computational externalist, and I’m an individualist.  Suppose S is a computing component of some larger system S*.  Piccinini assumes Digital Perseverance: necessarily, feature F counts as a distinct digit for S only if F counts as a distinct digit for S*.  The computational structure of the whole constrains how digits are individuated for the part (2008: 229, 2015b: 41; cf. Bontly 1998: 570, and Segal 1991: 492-3).  If Digital Perseverance is true, then so is computational externalism.  For, given Digital Perseverance, the computational structure of a part can’t supervene on its physical structure, though it might supervene on the physical structure of the whole mechanism of which it is a part (cf. Segal 1991: 492-3).

I reject Digital Perseverance.  Where Piccinini sees the computational structure of the larger mechanism imposing constraints on the computational structure of the component part, I see computational significance getting lost in composition.  First, it is uncontroversial that a complete and correct computational description of a whole device and each of its parts is compatible with the parts performing different, usually simpler,[10] computations than the whole mechanism (cf. Egan 1995: 192).  If the part and whole can perform different computations, why can’t the part perform computations on more digits than the whole?  Why must the whole mechanism care about everything each of its parts cares about?  Without an answer to these questions, there is no reason to deny that parts can compute over more digits than their wholes.

Second, it’s possible to have a computing device, such that one of its computing parts is differentially sensitive to three voltage ranges when the device as a whole is differentially sensitive only to two voltage ranges.[11]  If you think that computational structures track functional significance as Piccinini and I do, then you have some reason to treat the larger device as computing over fewer digits than one of its parts.

We can, therefore, tentatively reject Digital Perseverance and combine computational individualism with a mechanistic theory of computational structure.  This tentative combination will prove to be a smart move.  In section 2, I will show how easily this combination accounts for miscomputation.  In section 3, I will show that computational externalism struggles to account for miscomputation, precisely because of its externalism.

2. Miscomputing Individualistically

2.1 Miscomputation

We can work our way toward understanding miscomputation by considering some analogies. To misbehave is to behave in a way that violates some relevant norm (e.g., the norms of rationality, morality, or etiquette).  To miscommunicate is to communicate in a way that violates some relevant norm (what is communicated is not what was intended to be communicated).  To misrepresent is to represent in a way that violates some relevant norm (e.g., truth).  You get the picture.  To miscompute is to compute in a way that violates some relevant norm, more specifically, a norm for what the system should be computing.  It is controversial what gives rise to computational norms, though it is popular to appeal to the system’s function or a designer’s intentions.  I focus on the case in which a system miscomputes by computing one function when it should have computed a distinct function.  This is the type of miscomputation that I think the externalist has the hardest time making room for. I do think, however, that a system may miscompute in other ways (e.g., computing exactly one function when it should have been computing multiple functions in parallel).

A miscomputation, so understood, is a special kind of malfunction.  It is special in two ways.  First, while malfunctions require normativity at some level of description, miscomputation requires normativity at the computational level in particular.  Where there is no computational behavior that the system should be performing, there is no miscomputation.  Second, not all computational failures count as miscomputations.  Some such failures are merely mechanical.  If the battery dies, the computing system won’t compute anything at all.  And if it doesn’t compute anything at all, it’s not miscomputing, just as a diagram doesn’t misrepresent unless it represents something.  Or suppose that the battery, fan, and solar panels function properly, but the component that should produce the computationally relevant inputs is completely broken.  The device consequently fails to compute anything at all, so it’s not miscomputing.  To miscompute is to compute.  As I mentioned in the introduction, this point is often neglected, and its neglect explains why no one has adequately accounted for miscomputation thus far.

An adequate account of miscomputation, so understood, involves at least three components: an account of computational behavior (what computation, if any, a system is performing); an account of computational norms (what computation(s) the system should be performing); and an explanation of how these two accounts together make it possible for a system to compute in a way it should not be computing.  The latter explanation may be as trivial as pointing out that, in circumstances C, the account of computational behavior entails that the system computes f1 when the account of computational norms says that what should be computed is a distinct function f2.  Computational individualists can provide the requisite components without difficulty; externalists cannot.

2.2 Miscomputation Explained

Before presenting the details of my own account, we need to take a step back and appreciate the connection between computational structure and computational behavior.  In what follows, I make two simplifying assumptions.  First, I assume that a system always manifests its dispositions (and so its computational structure) when triggered by the relevant input conditions.  This allows us to ignore various complications, such as the possibility of masking, performance error, etc.  Witches and protective Styrofoam can mask a vase’s fragility so that the vase won’t break when dropped.  We set aside such possibilities.  We assume that, when dropped, a vase will manifest its fragility by breaking.  A computing system likewise performs the computations that manifest its computational structure.  Second, I assume that all computation is deductive or non-probabilistic.  When such a computing system receives a computational input and manifests its computational structure, it is guaranteed to produce a specific computational output.  These two assumptions are harmless.  Any adequate account of miscomputation will allow a computing system to miscompute when it manifests a deductive computational structure.  They simplify our discussion by letting us assume that any difference in computational behavior must be explained by a difference in computational structure.

Recall from section 1 that, in the abstract, a computational structure is a complete mapping of computational inputs to computational outputs.  A physical system has a certain computational structure iff the structure/mapping counts as a correct description of the system’s actual and counterfactual behavior.  The computational structure of a system tells us what the system would do were it to receive a given computational input.  Perhaps when given string 0,1 as an input, it outputs 1,1.  When you know the actual inputs to the system (and you assume that a deductive computational structure is manifested), the computational structure tells you the actual computational behavior of the system.  In other words, computational structure + computational inputs = computational behavior.
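
In the toy idiom from §1 (a sketch; the mapping fragment is the string from the text):

```python
# Computational structure + computational inputs = computational behavior.
structure = {("0", "1"): ("1", "1")}   # a fragment of a complete mapping
actual_input = ("0", "1")
behavior = structure[actual_input]     # the system's behavior: ("1", "1")
```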

Any account of computational behavior will have this same basic structure.  The main difference between rival accounts of computational behavior will be their respective accounts of computational structure.[12]  When circumstances remain fixed, a difference in computational behavior requires a difference in computational structure.

My account of a system’s computational behavior begins, naturally enough, with my account of computational structures.  Recall that, on my view, a system’s computational structure is determined by its internally individuated functional structure (that satisfies the definitive list).  To determine what computations are being performed by the system, just plug in the computational inputs (i.e., those states of the system that play the relevant kind of functional role in the behavior of the system).

I deny, however, that narrowly individuated functional structure determines the computational norms for the system.  The norms that guide a system’s computational behavior are given, at least in part, by something external to the system itself, e.g., the evolutionary history of the system, the intentions of a designer, the role the system plays in some even larger system, etc.  To account for miscomputations, just pick your favorite account of the normativity associated with functional roles.  Any of the standard accounts will do,[13] and so will Piccinini’s (2015b, ch 6).  Any of them is sufficient for my purposes, because they all individuate norms widely.  Since computational behavior is individuated narrowly and computational norms are individuated widely, it’s easy to see that a system can compute a function that it isn’t supposed to compute.

Suppose, for illustration, that the normativity for a manufactured system can be supplied by the intentions of the designer.  If so, then miscomputations can arise because of design error (cf. Piccinini 2015b: 149).  I might intend for system S to compute function f1 but mistakenly construct it so that it computes function f2 instead.  To compute f1, perhaps the system needs to be differentially sensitive to three voltage ranges when its current construction makes it differentially sensitive to only two.  In such a case, S would be miscomputing.
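
In the toy idiom used above (the thresholds and digit labels are illustrative assumptions), the design error looks like this:

```python
def intended_digitizer(v):
    # What I intended to build (what computing f1 requires):
    # three voltage ranges, three digits.
    return 0 if v < 2.5 else (1 if v <= 5.0 else 2)

def built_digitizer(v):
    # What I actually built (and so what S manifests in computing f2):
    # only two voltage ranges, two digits.
    return 0 if v <= 5.0 else 1

# The device as built collapses a distinction the design requires,
# so S computes f2 when it should compute f1: a miscomputation.
assert intended_digitizer(1.0) != intended_digitizer(4.0)
assert built_digitizer(1.0) == built_digitizer(4.0)
```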

Or suppose that S is a computing component of some larger biological system S*.  In order for S to make its essential contribution to the biological fitness of S* (or whatever determines S’s teleological function), it needs to compute function f3.  But a component of S is damaged (S* has a brain lesion, perhaps), so S computes f4 instead.  S is miscomputing.  f4 is the computation that actually describes the behavior of S, when S should have behaved so as to be correctly described by f3.  Again, given my account of computational structure and any standard account of computational norms, S would be miscomputing.

2.3. Two Senses of Functional Structure

To better understand how my account of miscomputation works, we need to disambiguate two senses of function and functional structure.  Functions are special kinds of dispositions.  Roughly, a component has a disposition to X in circumstances C iff it tends to X in C.  Hearts are disposed to pump blood when they receive the relevant sort of electrical charges and are connected to blood vessels in the relevant sort of way and so on.  Hearts are also disposed to make noise in those same circumstances.  Yet not all dispositions have the same sort of explanatory significance within a larger system.  A component’s disposition is a function of that component iff the disposition is needed, at a certain level of abstraction, to account for the dispositions and behavior of the overall system (cf. Cummins 1983: 28-9).  At the biological level of abstraction, we need to appeal to the heart’s pumping blood—but not its making noise—in order to account for the dispositions and behavior of the circulatory system.  Hence, all functions are dispositions, but not all dispositions are functions.  To be a function is to be a disposition that plays an explanatory role in a larger system.

A functional structure of a system represents, at a certain level of abstraction, how the dispositions of each component underwrite the dispositions (powers, capacities) of the overall system.  When the dispositions of the various components are manifested, they work together to explain the behavior of the overall system.  The functional structure of the circulatory system would not only represent the heart’s contribution, but also the contributions of blood and blood vessels, to the dispositions and operation of the circulatory system.  Functional structures track how the functions of the components work together to account for the dispositions and behavior of the overall system.

Functional structures, so understood, are purely descriptive.  They describe, at a certain level of abstraction, how the dispositions of the components actually work together to explain the system’s dispositions and behaviors.  There is no further claim that this is how the various components should work (together).  Perhaps it is and perhaps it isn’t.  When a system functions improperly—when actual and proper function come apart—the purely descriptive notion tracks actual function.

There is also a normatively loaded sense of function and functional structure.  To say that the function of the heart is to pump blood, in this sense, is to say that the heart is supposed to pump blood.  This sense of function doesn’t track how things actually work but how they should work.  The normatively loaded sense of functional structure represents, at a certain level of abstraction, the dispositions the components should have and how those dispositions should work together to underwrite the dispositions the system should have.

The purely descriptive and normatively loaded senses come together in properly functioning systems.  If a system is functioning properly, its purely descriptive and normatively loaded functional structures are identical.

In contrast, the purely descriptive and normatively loaded senses come apart in malfunctioning systems.  If a system malfunctions, one functional structure will describe the actual organization and operation of the system and a distinct structure will describe how the system should be organized and how it should operate.  Malfunction is possible only when actual (purely descriptive) functioning deviates from proper (normatively loaded) functioning.

The purely descriptive/normatively loaded distinction is not the narrow/wide distinction.  The latter distinction concerns whether, at a certain level of abstraction, a system’s environment or its role in a larger system can affect the individuation of its current structure and behavior.  The former concerns whether, at a certain level of abstraction, a certain structure and behavior are (pure) descriptions of or norms for a given system. It is ordinarily assumed that norms must be individuated widely, but, in principle, purely descriptive functional structure could be wide or narrow.  My view of computational structures is that the purely descriptive structure is narrow and the normatively loaded is wide.  In other words, I think computational behavior is narrow and computational norms are wide.  At other levels of abstraction, both behavior and norms may be wide.  One externally individuated functional structure might provide the norms for our mental structure and behaviors (e.g., the norms of rationality), and—given our assumption of content externalism—a distinct widely individuated functional structure would specify our actual mental structures and behavior.

I identified, in §1, one difference between my mechanistic theory and that of Piccinini: my version appeals to narrow functional structure and his appeals to wide functional structure.  A second difference is that my version appeals to purely descriptive structures and his to normatively loaded ones (cf., e.g., Piccinini 2015b: 113-4, 151).  I claim, then, that computational structures are determined by a specific type of narrowly individuated, purely descriptive functional structure.  The specific type is the type that satisfies the definitive list, the list of properties that distinguish computational functional structure from other kinds of functional structure.

Narrowly individuated functional structure is, of course, not a good candidate to account for normatively loaded functional structure, or more specifically, the computational norms of a computing system.  That is why, when I discuss the norms for a device—computational or otherwise—I follow just about everyone else in asserting that the norms are individuated widely.  Miscomputation is made possible on my account, precisely because there is a gap between the narrow individuation of (purely descriptive) computational structure and the wide individuation of (normatively loaded) computational norms.  A system miscomputes when its behavior manifests a narrow computational structure that the widely individuated norms say that it should not have.

My account of computational structure does impose one constraint on computational norms that I should mention.  Recall that, in properly functioning systems, a system’s purely descriptive functional structure is identical to its normatively loaded functional structure.  In properly functioning computing systems, the system’s actual computational structure will be identical to the computational structure that it should have.  Since (actual) computational structures are narrowly individuated, the computational structure that a system should have must be specifiable in narrowly individuated terms.

Contrast the following two norms for a given system S:

Norm specified in wide terms: when the input voltage is >5v, output the voltage that will allow the larger system to operate as an and-gate.

Norm specified in narrow terms: when the input voltage is >5v, output >5v.

The first norm is specified in wide terms, because it references the larger system of which S is actually a component.  The second norm is specified in narrow terms, because it mentions behavior that can be individuated internally to the system.  There is no reference, explicitly or implicitly, to things beyond the system itself.  Essentially, the internally individuated structure a system should have is whatever internally individuated structure properly functioning versions of that system do have.  I say computational norms are widely individuated not because they are specified in terms that reference things beyond the internally individuated states and structure of the system; rather, computational norms are widely individuated because what makes something the proper internal structure for a system is determined by things beyond the internally individuated states and structure of the system (e.g., what it takes for the system to survive in its environment, the goal of some designer, etc.).
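
The contrast can even be put in code (a sketch; the predicate merely restates the narrow norm from the text):

```python
def satisfies_narrow_norm(input_v, output_v):
    """Norm specified in narrow terms: when the input voltage is >5v,
    output >5v.  Nothing beyond the system's own states is mentioned."""
    return (output_v > 5.0) if (input_v > 5.0) else True

# What is wide is not the norm's content but its source: that this is
# the proper structure for S is fixed by things beyond S (a designer's
# goal, survival in an environment, etc.).
```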

3. Miscomputation and Computational Externalism

You have just seen how easily computational individualists can accommodate miscomputation.  Since they can endorse internally individuated computational behavior while holding onto externally individuated computational norms, it is no mystery how a system can compute a function that it shouldn’t.  In this section, we see that computational externalism—the claim that computational structures are individuated widely—makes it much more difficult to explain how miscomputation is possible.

3.1. Some Options for the Externalist

Recall that, to account for miscomputation, one must do more than show that computing systems can malfunction.  If a computing system is so broken that it fails to compute at all, it is malfunctioning but not miscomputing.  An account of miscomputation must have at least three parts: (i) an account of computational behavior; (ii) an account of computational norms; and (iii) an account of how it’s possible for a system to compute something it isn’t supposed to be computing.  (iii) imposes a constraint on one’s account of computational behavior, namely that it must be possible for malfunctioning systems to compute.  Otherwise, it will never be the case that a system is computing in a way that it shouldn’t.

Piccinini provides, as far as I’m aware, the only extended philosophical discussion of miscomputation.  When Piccinini (2015b: 148-50) “explains” how his mechanistic theory of computation accounts for miscomputation, he fails to explicitly address any of (i)-(iii).  He does describe several different kinds of miscomputation; however, he never even attempts to show that his mechanistic theory makes those kinds of miscomputation possible.  The problem, as I mentioned in the introduction, is that Piccinini doesn’t properly distinguish malfunction from miscomputation.  He shows, at most, that a computational system can malfunction.  Yet he fails to account for miscomputation, a very specific type of malfunction.

In chapter 6, Piccinini provides an account of wide functional structure that is teleological in nature: to be a function is to make a stable contribution to attaining certain goals.  Roughly, for organisms, the goal would be survival and, for artifacts, the goal would be some goal of a designer.  This kind of functional structure is wide because these goals are determined, in part, by something beyond the system itself.  Let us say that a system’s teleologically individuated computational structure is the computational structure the system needs in order to contribute, in a stable way, to the relevant goals of the system or its designer.  It is natural to suppose that the teleologically individuated computational structure plays an important role, for Piccinini, in accounting for miscomputations.  Here are some options.

  A. A system’s computational structure and computational norms are both necessarily identical to the system’s teleologically individuated computational structure.
  B. A system’s computational structure—but not its computational norms—is necessarily identical to the system’s teleologically individuated computational structure.
  C. A system’s computational norms—but not its computational structure—are necessarily identical to the system’s teleologically individuated computational structure.

Option A makes miscomputation impossible.  If both computational structure and computational norms are identical to the system’s teleologically individuated computational structure, then a system always has the computational structure it should have.  Yet a system doesn’t malfunction computationally unless it has a computational structure it shouldn’t: a system with the structure it should have will always compute whatever it is supposed to.  And where there’s no computational malfunction, there’s no miscomputation.  I tentatively think that Piccinini (2015b) is committed to option A.  Even if I’m right about this, my job is not done.  To show that miscomputation poses a serious problem for Piccinini (and the externalist, more generally), we need to show that Piccinini can’t easily account for miscomputation by choosing one of the other options.

Option B also seems hopeless.  The problem for B is also a problem for A: both A and B claim that teleology determines computational structure.[14]  Given the same inputs, A and B claim that there is no change in computational structure without a change in teleological function.  The result is disaster.  Suppose that S1, S2, and S3 are all computing systems of kind K, and their teleological function requires them to perform a certain computation.  In particular, they are required to output O1 (say, ≤5v) when they receive input I1.  While S1 fulfills its teleological function, S2 and S3 do not (they’re broken).  When given I1, S1 outputs O1 (≤5 volts), S2 outputs a distinct output O2 (9 volts), and S3 is so broken that it doesn’t output anything at all.  S1 and S2 seem to be computing distinct functions, and S3 doesn’t seem to be computing at all.[15]  It seems absurd to claim that S1, S2, and S3 are each computing the same function.  Yet this absurdity is just what A and B entail.  Since S1, S2, and S3 share the same teleological function, they have the same computational structure.  As explained in §2.2 above, the same computational structure plus the same computational inputs guarantees the same computational behavior.
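
A toy sketch makes the absurdity vivid (hypothetical voltages; the Python functions just record each system’s actual behavior):

```python
# All three systems of kind K share one teleological function:
# output O1 (<=5v) when given input I1.  Options A and B therefore
# assign all three the same computational structure.
def s1(i): return 4.0    # working: outputs O1 (<=5v)
def s2(i): return 9.0    # broken: outputs O2 (9v) instead
def s3(i): return None   # so broken it outputs nothing at all

# Same (teleologically fixed) structure + same input should yield the
# same behavior, yet the behaviors manifestly differ:
assert s1("I1") != s2("I1") and s2("I1") != s3("I1")
```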

The moral is that the correct account of computational structure must be at least partly independent of the system’s teleological function (that satisfies the definitive list).  It must be possible, in other words, for damage to a component to change the computational structure of the component without changing its teleological function.  In retrospect, this shouldn’t surprise us.  Computational structure and behavior track how the system is actually structured and actually operating, not how it should be structured and should be operating.  Any account of computational behavior that relies on teleological norms is taking a normatively loaded functional structure and trying to make it play a purely descriptive role.  We shouldn’t be surprised that there’s a poor fit.

I should stress that teleological theories of content have nothing to fear from this objection, precisely because they use teleology to play a normatively loaded role and only use it for such a role.  Crudely put, such theories hold that a mental state M represents C when C is what should cause M, where the relevant teleology determines what should cause M (cf. Neander 2012, sec 3).  This normativity helps account for misrepresentation.  A cleverly disguised mule might be the actual cause of my belief that there’s a zebra.  This is a case of misrepresentation (a false belief), because what caused my belief (the mule) is not what should cause it (a zebra).  We should learn from this account of misrepresentation.  Just as we can vary the actual cause of M without varying M’s teleology, we can vary the actual computational structure of a system without varying its teleology.

My individualist, mechanistic theory avoids my objection to A and B. The correct account of computational structure needs to let computational structures vary with changes to the actual composition of the system even when those changes don’t yield a difference in teleology.  By tying computational structures to narrowly individuated functional structures (that satisfy the definitive list), we accomplish precisely that.  We allow damage to a system to affect its computational behavior without affecting its teleological function.  Just as some hearts beat in ways that fail to fulfill their teleological functions, some computing systems compute in ways that fail to fulfill their teleological functions.

Option C is the only externalist option that has a hope of accounting for miscomputation.  It uses teleology for what it is well-suited to do and only for what it is well-suited to do.  On this option, teleology provides an account of computational norms,[16] but not an account of computational behavior.  This is as it should be.  Teleology is much better suited to account for a computing system’s (biological/computational) norms than for its actual behavior.  As it stands, though, C fails to provide a complete account of miscomputation.  It must be supplemented by an account of a system’s (actual) computational structure and behavior.  There are at least two ways of supplementing C.

The first begins by claiming that the computational structure of a malfunctioning computing system is necessarily identical to its narrowly individuated functional structure.  This first way is able to explain miscomputation in roughly the same way I did: it is the gap between wide norms and narrow computational structure that makes miscomputation possible.  But does the resulting view remain an externalist view?  Or does it require conversion to individualism?

It depends.  So far the first way specifies only that the computational structure of malfunctioning systems is identical to their narrow functional structure.  If the computational structure of a properly functioning system is also identical to its narrow functional structure, then the computational structure of every computing system is narrowly individuated.  The resulting view just is the individualist account of computational individuation I developed earlier in the paper.  The first way of supplementing C remains a genuine externalist position only if it claims that the computational structure of properly functioning systems is wide.  Such a position seems ad hoc.  It holds that the computational structure of malfunctioning systems is narrow and that of properly functioning systems is wide.  But why would the scope of the individuation vary according to whether a system functions properly or not?  In short, the problem with the first way of supplementing C is that it is either ad hoc or not a genuine alternative to my individualist account of miscomputation.[17]

The second way of supplementing C is, perhaps, more promising.  Recall that C holds that computational norms are determined by a system’s teleologically individuated computational structure.  The second way of supplementing C claims that a system’s computational structure—whether properly functioning or not—is determined by some other sort of wide structure.

If the content of computational states were required to individuate computational structure, then this second way would have some promise.  The kinds of causal connections that partly individuate content seem better suited to determine actual computational structure and behavior than do teleological functions.  After all, given content externalism, these causal connections are already employed to partly determine actual behavior at the mental level of abstraction.  Perhaps these causal connections also partly determine the behavior of the system at the computational level of abstraction.  In the good ol’ days, of course, the rallying cry of computationalists was No computation without representation! (Fodor 1975: 34; Pylyshyn 1984: 62).  Rallies are exciting.  Ra ra and all that.  Yet I think it’s better to leave representation out of the individuation of computational structure.  The next sub-section explains why.

3.2. Content to the Rescue?

Prima facie, my individualist story about computational structure will be simpler than any externalist account that employs content.  Theories of (wide) content generally individuate content by appealing both to causal relations with the environment and to the functional structure of the representing system.  That they appeal to the latter can be easy to forget, because the debate about content externalism already presupposes that we are talking about thoughts and representations.  Tigers cause the hair on my head to stand up, and they cause my belief that the tigers are coming to get me.  The standing of my hair might track the presence of tigers, but it does not represent them.  My belief, on the other hand, does represent tigers.  What explains this representational difference?  The answer is that only beliefs play the right kind of functional role within the system.  It’s controversial, of course, what sort of functional role is necessary for representation, but mere tracking is not enough.  (For an excellent discussion of representation in the context of perception, see Orlandi 2014, ch 3.)

Semantic accounts of computation need functional structure in a second way.  The appeal to content is best suited to individuate computational states.  Individuating computational states is necessary but not sufficient for individuating computational transitions, or computations.  If computations C1 and C2 consist in the same input and output states, it does not follow that they are identical.  The computational identity of their transitions is “bound together” with the complete mapping (or rule) that correctly describes the behavior of the system.  C1 and C2 will be distinct computations if they manifest distinct computational structures.  Analogously, suppose that systems S1 and S2 receive 2,2 as inputs and then output 4.  It doesn’t follow that they are performing the same mathematical function, because S1 might be performing addition and S2 might be performing multiplication.  It doesn’t follow because the complete mapping from inputs to outputs that describes S1’s (actual and counterfactual) behavior might be distinct from the complete mapping that describes S2’s (actual and counterfactual) behavior.
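
A two-line sketch of the point (illustrative; the named functions are stand-ins for the two systems):

```python
# Two systems agree on the input (2, 2) yet compute distinct functions;
# only the complete mapping over all inputs settles which function it is.
add      = lambda x, y: x + y
multiply = lambda x, y: x * y

assert add(2, 2) == multiply(2, 2) == 4   # same output here...
assert add(2, 3) != multiply(2, 3)        # ...distinct mappings overall
```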

If you want computational individuation to be an objective, non-vacuous affair (if you don’t, just play along), then here’s a question: what determines which mapping/rule is being followed in a given computation?  The only answers that I’m aware of ultimately appeal to some form of functional structure.  This includes accounts, such as Chalmers’s (2011) and Piccinini’s (2015b), that emphasize the functional or dispositional feature of their views.  Yet even classic proponents of semantic accounts, such as Pylyshyn (1984: 66-9) and Fodor (e.g., 2008: 77), demand a certain kind of causal structure for a transition to even count as a computation (they endorse the “nearly ubiquitous view” discussed below).  When individuating computations, the appeal to functional structure appears inevitable.

If we must appeal to functional structure to individuate computations, a theory is prima facie less simple if it appeals to content as well.  In order to justify the appeal to content, then, the externalist has an extra burden that the individualist does not have: she must show that the appeal to content is worth the extra complexity.  I don’t think this demand can be met.  There is just nothing for content to do at the computational level of description.  I can’t provide a complete defense of this claim here, but Chalmers (2011: 334-5) and especially Piccinini (2008; 2015b, ch 3, 7) have made many of the key points.[18]  Here I will just briefly respond to one recent argument that content is relevant to a system’s computational structure.

The “nearly ubiquitous view” of (digital) computational description is that computational transitions consist solely in the “formal” or syntactic manipulation of strings of digits (Rescorla 2014: 175).  On this view, at the computational level of abstraction, the only properties of input strings that are causally relevant to which output string is produced are their formal properties—such as which digits appear in the string, how many times, and in what order.  Since the formal properties of strings can supervene on the narrowly individuated functional structure of a physical system, this formal conception of computation is compatible with the version of computational individualism that I’ve defended in this paper.  One strategy for defending externalism is to challenge this formal conception and to argue that content is required to fully capture, at the computational level of description, why the system transitioned from one string to another.  Rescorla attempts to make good on this strategy.

Rescorla (2014: 175) announces: “I reject the nearly ubiquitous view that digital computation consists [solely[19]] in ‘formal’ syntactic manipulation.”  I do think that Rescorla establishes, quite forcefully, something interesting and worthwhile.  What he shows is that, given Woodward’s interventionist view of causal relevance, the representational properties of strings are causally relevant to which string a system outputs.  That’s interesting, and it helps to buttress the computational theory of mind against epiphenomenalism; however, it’s not tantamount to taking down the formal conception of computational description.  To do that, Rescorla needs to establish the additional thesis that the causal relevance of representational properties occurs within the computational level.

As Rescorla recognizes (e.g., 185), just because one factor causally influences another, it doesn’t follow that the influence occurs within a single level of description.  For example, given interventionism, Kim’s (2005, ch 2) famous causal exclusion argument fails.  That is, mental content can causally influence what’s happening at the neural level, even though the mental level is thought to be realized by (and so is distinct from) the neural level (cf. Rescorla 2014, secs 6.1 and 10).  This example of cross-level influence is a key analogy for Rescorla.  When responding to worries that what’s happening at the formal level excludes content from causal relevance, Rescorla contends that “We should no more conclude that the syntactic properties do all the ‘causal work’ than we should conclude that the neural properties do all the ‘causal work’” (2014: 201).  If anything, this analogy reinforces the orthodox view that content is not involved at the computational level.  Rescorla shows us something important, namely that content is causally relevant to computation.  Yet he doesn’t show us that content is relevant to describing computing systems within the computational level.

We considered various ways for the computational externalist to account for miscomputation.  The most promising was the second way of supplementing C.  On this way, computational norms are specified by one wide structure and computational structures are individuated by a different wide structure.  For this strategy to succeed, the externalist must find a kind of wide individuation that is well suited to account for (actual) computational structure.  The various kinds of teleology on the market seem poorly suited to account for computational structure (though some type of teleology may well account for computational norms).  Content is better suited to (help) individuate the actual behavior of a system.  Nonetheless, the appeal to content adds complexity but no benefit.  And even if you can find something for content to do at the computational level of description, you still have the additional work of explaining how content helps explain miscomputation.

I have not shown that externalist accounts of miscomputation are hopeless; however, I have shown that they face challenges that my individualist account does not face.  For the time being, then, we have a reason to prefer individualism to externalism.

 

Conclusion

Computational externalists sometimes complain that individualism lacks the resources to individuate computational structure.  I showed they were mistaken by allowing a certain kind of narrowly individuated functional structure to determine computational structure.  I relied on this account of computational structure to provide the first adequate account of miscomputation.  I then showed that externalists have difficulty accounting for miscomputation precisely because they lack an adequate account of computational structure.  Computational externalism is the dominant position on computational individuation.  This paper suggests that it shouldn’t be.[20]

 

 

References

Allen, Colin. 2003. “Teleological Notions in Biology.” Stanford Encyclopedia of Philosophy. Stable URL: http://plato.stanford.edu/entries/teleology-biology/.

Bontly, Thomas. 1998. “Individualism and the Nature of Syntactic States.” British Journal for Philosophy of Science 49: 557-74.

Burge, Tyler. 1986. “Individualism and Psychology.” The Philosophical Review 95: 3-45.

Chalmers, David. 2011. “A Computational Foundation for the Study of Cognition.” Journal of Cognitive Science 12: 323-57.

Cummins, Robert. 1989. Meaning and Mental Representation. Cambridge (MA): MIT Press.

_____. 1983. The Nature of Psychological Explanation. Cambridge (MA): MIT Press.

Dewhurst, Joe. 2014. “Mechanistic Miscomputation: a Reply to Fresco and Primiero.” Philosophy and Technology 27: 495-8.

Egan, Frances. 1995. “Computation and Content.” The Philosophical Review 104: 181-203.

_____. 1992. “Individualism, Computation, and Perceptual Content.” Mind 101: 443-59.

Fodor, Jerry. 2008. LOT 2: The Language of Thought Revisited. Oxford: Oxford University Press.

_____. 1975. The Language of Thought. Cambridge (MA): Harvard University Press.

Fresco, Nir and Giuseppe Primiero. 2013. “Miscomputation.” Philosophy and Technology 26: 253-72.

Kim, Jaegwon. 2005. Physicalism, or Something Near Enough. Princeton: Princeton University Press.

Neander, Karen. 2012. “Teleological Accounts of Mental Content.” Stanford Encyclopedia of Philosophy.

Orlandi, Nico. 2014. The Innocent Eye: Why Vision is Not a Cognitive Process. New York: Oxford University Press.

Peacocke, Christopher. 1999. “Computation as Involving Content: A Response to Egan.” Mind and Language 14: 195-202.

Piccinini, Gualtiero. 2015b. Physical Computation: A Mechanist Account. Oxford: Oxford University Press.

_____. 2015a. “Computation in Physical Systems.” Stanford Encyclopedia of Philosophy. Stable URL: http://plato.stanford.edu/entries/computation-physicalsystems/.

_____. 2008. “Computation without Representation.” Philosophical Studies 137: 205-41.

_____. 2007. “Computing Mechanisms.” Philosophy of Science 74: 501-26.

Pylyshyn, Zenon W. 1984. Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge (MA): The MIT Press.

Rescorla, Michael. 2014. “The Causal Relevance of Content to Computation.” Philosophy and Phenomenological Research 88: 173-208.

Segal, Gabriel. 1991. “Defence of a Reasonable Individualism.” Mind 100: 485-93.

Shagrir, Oron. 2001. “Content, Computation, and Externalism.” Mind 110: 369-400.

Shapiro, Lawrence A. 1997. “A Clearer Vision.” Philosophy of Science 64: 131-53.

 

 

Notes

[1] This apparent fact is bemoaned by Dewhurst (2014), Fresco and Primiero (2013: 254), and Piccinini (2015b: 14, 48).

[2] It won’t do to insist that the term miscomputation should be used to mean malfunction of a computing system.  The labels we use matter only so much.  As long as what I call miscomputation is possible, an adequate account of computation needs to explain how it is possible.

[3] Computational individualists include Cummins (1989: 81) and Egan (1992; 1995).

[4] Computational externalists include Burge (1986: 28-9), Bontly (1998: 569-70), Peacocke (1999), Piccinini (2015b: 42-3), and Shapiro (1997: 141). Even Segal (1991: 492-3) is an externalist insofar as he allows the computational structure of a system to depend on its role in a larger system.

[5] Egan (1992; 1995) defends the conjunction of computational individualism and content externalism.

[6] I clarify the relation between dispositions and functions in §2.3.

[7] For additional constraints on digital computation, see Piccinini (2007: 508-514, 2015b: 125-34).

[8] Chalmers’s organizational independence is similar to Piccinini’s medium-independence.  While Chalmers requires this sort of independence for many mental properties, he is committed to pancomputationalism because he doesn’t require it of computational ones (contrast Chalmers’s discussion of digestion on pp. 332 and 337-8).

[9] To determine these groupings, Shagrir (2001: 382-3) argues that we must appeal to semantically individuated tasks.  Given our assumption of content externalism, Shagrir’s view entails computational externalism.  But the rest of the paragraph shows that we don’t need to appeal to semantically individuated tasks after all.

[10] Parts tend to perform simpler computations, because systems are generally constructed so that their computing operations are somewhat efficient.  It’s possible for the parts to perform more complex computations than the whole.

[11] You’ll save yourself some time if you take my word for it.  For the most scrupulous readers, I describe such a device.  Let S3 be the composition of S1 and S2.  Suppose S1 is differentially sensitive to three voltage ranges: <2.5v, 2.5v-5v, and >5v.  Let S1’s outputs be S2’s inputs, where S2 is only bi-stable, and thus only differentially sensitive to two voltage ranges, 0-5v and >5v.  Whenever S2’s input is 0-5v, it outputs 0-5v.  Whenever its input is >5v, it outputs >5v.  In such a case, the system S3, which is solely composed of S1 and S2, will only be differentially sensitive to two ranges, 0-5v and >5v.  (Assuming that Digital Perseverance is false, S3 is an and-gate.  As is typical with mechanistic explanation, we can explain how S3 has such a computational capacity by breaking S3 into its component parts.  It has the and-gate capacity because it has a component, S1, that performs an “averaging task of sorts” (cf. Piccinini 2008: 228), and a component, S2, that performs the identity function on S1’s outputs.)
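
For readers who prefer code to prose, here is a toy model of that device (the averaging rule and the particular voltages are illustrative assumptions):

```python
def s1(v_a, v_b):
    # "Averaging task of sorts": S1's output can fall into any of three
    # ranges (<2.5v, 2.5v-5v, >5v), so S1 distinguishes three digits.
    return (v_a + v_b) / 2

def s2(v):
    # Bi-stable: computes the identity function over its two digit types,
    # mapping the 0-5v digit to the 0-5v digit and the >5v digit to the
    # >5v digit (canonical voltages stand in for each digit).
    return 0.0 if v <= 5.0 else 9.0

def s3(v_a, v_b):
    # The whole device: S2 operating on S1's output.
    return s2(s1(v_a, v_b))

# Taking >5v as "high": S3 outputs high only when both inputs are high,
# so the whole behaves as an and-gate over two ranges even though its
# part S1 is differentially sensitive to three.
high, low = 9.0, 0.0
assert s3(high, high) > 5.0
assert s3(high, low) <= 5.0 and s3(low, low) <= 5.0
```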

[12] Rival accounts of computation can disagree about what counts as the computational input to the system, i.e., what counts as a digit, but these differences will reduce to disagreements over the computational structure of the system.  This is what happened in §1 when I very briefly discussed Shagrir’s challenge for individuating digits.

[13] See Allen (2003) for a survey of the standard accounts for biological organisms.

[14] Bontly (1998: 569-70) and Piccinini (2015b: 43) claim that teleology determines computational structure, so they opt for either A or B.

[15] Why think that S3 doesn’t compute at all?  Computation is a certain kind of transition between inputs and outputs; no outputs, no computation.  S3 makes as if to compute but fails to compute.  Consider an analogy.  Suppose that, as I’m winding up to throw the ball to my kid, I’m distracted by the alien warships that have descended into view.  While my arm moves forward in a lazy throwing motion, I never let go of the ball.  I didn’t throw the ball.  I made as if to throw.  Just as throwing the ball requires the ball to leave my hand, computing requires the system to output a (complete) string of digits.

[16] You can replace teleology with your preferred account of computational norms.  It won’t matter.

[17] Those who appeal to functional explanations often insist that properly functioning systems have a kind of priority over malfunctioning systems (e.g., Piccinini 2015b: 109-10).  Can this priority provide a non-ad hoc account of miscomputation?  Not by itself.  First, the resulting view must explain how it is possible for a damaged component to compute a function that it is not supposed to compute.  By itself, the priority of proper function doesn’t tell us how this is possible.  Second, the priority of teleologically or normatively loaded explanations does not guarantee that a single kind of explanation, computational explanation, is wide in properly functioning systems and narrow in malfunctioning systems.  There might be two different kinds of explanation, and computational explanation might be the one that is narrow and lacks priority.

[18] My response to Shagrir in §1 is also relevant.  Between my response to Shagrir above and my response to Rescorla below, this paper responds to two very different arguments that content is needed to account for the computational structure of the system.

[19] The context makes it clear that Rescorla isn’t denying that formal manipulation is relevant to computational transitions; what he’s denying is that formal manipulation is all that’s relevant.

[20] This paper is in memory of Jonathan McKeown-Green (1963-2015), who helped me think through the inchoate ideas that ultimately led to this paper.  Matt Haug, Gualtiero Piccinini, Kevin Sharpe, and an anonymous referee provided helpful comments on earlier drafts.  Those earlier drafts were written thanks to a William and Mary Summer Research Grant.  I owe these people and William and Mary my gratitude.

6 thoughts on “Miscomputing Individualistically: It’s the Only Way to Do It”

  1. Marcin Miłkowski (Institute of Philosophy and Sociology, Polish Academy of Sciences, Warsaw) — Invited Commenter says:

    Do we need supervenience to talk of computation?

    In my comments, I will restrict myself to one crucial issue in an otherwise very dense and interesting paper by Chris Tucker. He is definitely right to stress the importance of giving a proper account of miscomputation, and not only of malfunction of computational systems. But to defend his own account, he relies on computational individualism. Computational individualism, as he states it, “is the claim that the computations a system performs are individuated narrowly. In other words, if we hold the laws of nature fixed, then a system’s computational structure supervenes on its physical structure.”

    Although supervenience claims seem to crop up over and over again in contemporary debates over mechanistic explanation and interventionist accounts of causation, I think the appeals to this notion are mostly based on a deep confusion. It’s time to give it up: one cannot express interesting claims about the organization of a mechanism by using the extremely crude vocabulary of supervenience. Now, global supervenience is almost toothless, as nobody in their right mind would deny that facts about computations depend on facts about the whole physical universe. But local supervenience is much more tricky and fails almost universally.

    To bring this point closer to home, let’s consider a very simple example of a logic gate. If we interpret high voltage as “1” and low voltage as “0”, then the gate under consideration implements an AND gate. But if we reverse the interpretation, then the same gate is an OR gate (the problems with logic gates are expounded more fully by Fresco; see, for example, Fresco 2014). Apparently, we keep all facts about the physical constitution of the circuit fixed but the computation changes. Changes depending on what? The defender of the semantic account of computation would probably say: “depending on your interpretation, of course!” Now, I would be more cautious. If this gate is part of a huge circuit, then there might be important facts about the interaction with other gates in the silicon chip that suggest the gate is actually an OR gate, for example. Simply put, under this mapping the gate would perform a sensible operation for the other gates, and explanation of the whole circuit would proceed more easily. We might discern that there’s a pattern of connections that corresponds to a combination of a NOT gate and an OR gate, which then seems to fit another pattern in which material implication could figure, etc.
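    A minimal sketch of the swap, assuming a gate whose output is the high voltage exactly when both inputs are high (the voltages, threshold, and names are illustrative assumptions, not anyone’s official notation):

        # One physical gate: output is the high voltage (7v) iff both inputs
        # are high, and the low voltage (3v) otherwise.
        HIGH, LOW = 7.0, 3.0

        def gate(v1, v2):
            return HIGH if (v1 == HIGH and v2 == HIGH) else LOW

        as_and = {HIGH: 1, LOW: 0}  # interpretation 1: high voltage reads as "1"
        as_or = {HIGH: 0, LOW: 1}   # interpretation 2: the reverse labeling

        for a in (LOW, HIGH):
            for b in (LOW, HIGH):
                out = gate(a, b)
                print(as_and[a], as_and[b], "->", as_and[out], "|",
                      as_or[a], as_or[b], "->", as_or[out])
        # Under interpretation 1 the table is AND; under interpretation 2 the
        # very same physical behavior reads as OR.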

    The lesson to be learnt from this simple example is that the claim of local supervenience is not really reasonable; it’s too easy to find counterexamples. In general, mechanistic explanations rely on looking around the mechanism, or on finding the appropriate account of the mechanism’s interaction with its milieu, which may comprise other mechanisms (Bechtel 2009).

    But there’s some truth in the claim used by Tucker to defend his computational individualism. It’s true that the voltages of logic gates need not be the same throughout a single large mechanism; in other words, Digital Perseverance (DP) can be violated, where DP is understood by Tucker as a constraint relating a part’s digits to the digits of the whole mechanism. Suppose S is a computing component of some larger system S*. DP claims that “necessarily, feature F counts as a distinct digit for S only if F counts as a distinct digit for S*.” Tucker rightly notices that there are systems in which DP is violated. Note, however, that Digital Perseverance is not a negation of computational individualism. It does not use the notion of supervenience at all, and even if Tucker tries to spell it out this way, he fails. As he says: “For, given Digital Perseverance, the computational structure of a part can’t supervene on its physical structure, though it might supervene on the physical structure of the whole mechanism of which it is a part”. If something might supervene, then it also might not. But supervenience claims are usually understood as necessarily true (of course, one might try to salvage Tucker’s point by adding that it might necessarily supervene, which, under S5 logic, would mean that it necessarily supervenes; but this kind of move seems to be a huge stretch). So the modality of the claim is wrong.

    Anyway, for the sake of argument, let’s assume there’s a way to fix this problem and Tucker is right that DP could be framed in terms of supervenience. But then DP would still be a type of local supervenience. The only difference from computational individualism is what we treat as the supervenience base, or how broadly the base is understood. But the negation of a claim of local supervenience, L1, is not yet another claim of local supervenience, say, L2. It’s just the claim that makes it logically possible to violate L1. And note the most important part: it might be violated just in case digits don’t supervene locally at all. This would be exactly my position: it may make little sense to try to delineate a given part of a computational mechanism just to draw the boundaries of the supervenience base while ignoring the overall context of the mechanism.

    Of course, if I’m right, then there are fewer constraints on hypothesizing a given computational structure in a mechanism than Tucker and Piccinini seem to admit. Fewer constraints means that a broader space of hypotheses has to be traversed to find a true one. In other words, my claim comes at a price. But at least it does not exclude some hypotheses a priori when there are no natural joints in nature that would disallow carving logic gate states this way rather than that.

    References
    Bechtel, William. 2009. “Looking down, around, and up: Mechanistic explanation in psychology.” Philosophical Psychology 22 (5): 543–64. doi:10.1080/09515080903238948.
    Fresco, Nir. 2014. “Objective Computation Versus Subjective Computation.” Erkenntnis 80 (5): 1031–53. doi:10.1007/s10670-014-9696-8.

    1. Many thanks to Miłkowski. Most importantly, he’s pointed me to some relevant research that I need to address in the next draft, but he’s also identified some areas where I could have been clearer.

      Individualism vs Externalism Simplified

      Miłkowski raises worries about my usage of “supervenience,” computational individualism, and their relation. So I’ll begin by presenting a simple way of distinguishing between individualism and externalism that doesn’t rely on the term “supervenience”. Suppose we each make a list of what we think can make a difference to the computational structure of a system. The divide between individualists and externalists is over what makes the list. Individualists usually have fewer things on their list. For the purposes of this paper, you are a computational externalist if any of the following things make it onto your list: the system’s role in a larger system; the system’s environment (including the intentions of a totally distinct system or person); the system’s causal history; or the system’s content. As an individualist, I’m claiming that no such thing can make a difference to the computational structure of a system.

      Digital Perseverance and Individualism

      Contra Miłkowski, this simple way of distinguishing between individualism and externalism is enough to show that individualism is incompatible with Digital Perseverance. If individualism is true, then a system’s role in a larger system can’t affect its computational structure. According to Digital Perseverance, it can: whether a given system computes over two digits or three can be affected by the number of digits the whole system computes over. Thus, as an individualist, I’m committed to rejecting Digital Perseverance.

      In his discussion of Digital Perseverance, Miłkowski worries that I’ve committed myself to the contingency of supervenience relations when I say: “given Digital Perseverance, the computational structure of a part can’t supervene on its physical structure, though it might supervene on the physical structure of the whole mechanism of which it is a part.” What I meant by this sentence can be expressed more perspicuously but more tediously as follows:
      Digital Perseverance is incompatible with the claim that a system’s computational structure supervenes on its physical structure; however, Digital Perseverance is compatible with the claim, endorsed by Segal, that a system’s computational structure supervenes on the physical properties of the whole system of which it is a part (where we allow that the whole can be a non-proper part of itself).
      Segal thus rejects both individualism as I construe it and the common claim that the environment of a system can affect a system’s computational structure. Hence, while Segal’s list of things that can make a difference to a system’s computational structure is larger than mine (because, unlike me, he allows a larger system to make a difference), his list is smaller than that of most computational externalists.

      Miłkowski’s Objections to Computational Individualism

      Miłkowski notes that we can treat something as an AND gate if we assign “0” and “1” to certain voltages, when we could also treat it as an OR gate simply by swapping the assignments of “0” and “1”. He concludes, “Apparently, we keep all facts about the physical constitution fixed but the computation changes.” I deny, however, that the computational structure of the system changes. These different assignments are mere notational variants of the same computation. Fresco, in the paper cited by Miłkowski, agrees with my assessment (Fresco 2014: 1050-5) and so does Shagrir (2001: 273).

      A different worry is grounded in Miłkowski’s observation that “In general, the mechanistic explanations rely on looking around the mechanism, or on finding the appropriate account of the mechanism interaction with its milieu, which may comprise other mechanisms.” By itself, this observation is just epistemological: it concerns how we figure out what something’s mechanistic structure is. It doesn’t follow that, at any (much less every) level of abstraction, mechanistic structure is partly individuated by things around the mechanism, etc. Analogously, I figure out who is in the room by looking around the room and identifying faces. Nothing follows concerning the individuation of personal identity. I agree that at some levels of abstraction—such as the content-laden intentional, folk psychological level—a system’s behavior is partly individuated by the system’s environment or its causal history or the like. Yet, in this paper, I argue that such “externalism” does not apply to the computational level of description. If you make the computational level of description externalist, then you make explaining miscomputation harder than it has to be.

  2. Chris Tucker’s paper contains an important contribution to the literature on functions and malfunctions. Namely, it raises the important and hitherto underappreciated question of how a system’s “actual function” (what the system does) relates to its teleological function (what the system ought to do). The paper also defends an individualistic account of miscomputation, which could become a valuable proposal.

    Everyone realizes that the notion of malfunction introduces a potential gap between the function a system should perform (teleological function) and the “function” the system performs (which Tucker calls “actual function”). If a system’s actual function is different from its teleological function, then the system malfunctions. This much is uncontroversial.

    The key insight of this paper is to raise the question of how to individuate a system’s actual function vis-à-vis its teleological function. Most accounts of teleological function are anti-individualistic in the sense that, according to them, teleological functions are wide—they supervene on more than what’s within the narrow physical boundaries of the system. What about the actual function? Is that a wide or a narrow function? More generally, how is actual function individuated relative to teleological function? As far as I can tell, this paper is the first explicit discussion of these interesting questions. Because of this, this paper might become an important reference in the literature on functions.

    The whole paper is framed as a discussion of computational functions and miscomputation (rather than functions and malfunctions in general), but the questions at the heart of the paper are more general. It may be worth stressing this generality in the paper. Or perhaps Tucker should write a separate paper raising these questions for the theory of functions in general. From now on, I will follow Tucker in focusing on the specific case of computational functions and malfunctions (i.e., miscomputations).

    Consider a particular device D. Suppose D’s teleological function is to compute the AND function, meaning that D ought to compute the AND function. Identifying D’s teleological function presupposes that you have a way of grouping certain subsets of D’s physical microstates into macrostates labeled ones and zeroes, which are the inputs and outputs of the AND function, and a corresponding way of grouping the physical microstate transitions into macrostate transitions from input pairs of ones and zeroes to output ones and zeroes (and of specifying during which time intervals such groupings are functionally relevant). The author agrees with the mainstream view that D’s teleological function is individuated widely, so that the groupings in question are defined by reference to something external to D (such as the way D is supposed to contribute to the goals of its makers and users).

    Now suppose that you study D’s behavior empirically to investigate the actual function it computes. You find that D produces various outputs in response to various inputs. If your study is comprehensive, the states and state transitions you record are D’s physical microstates and microstate transitions. There is an indefinite number of ways that various subsets of D’s physical microstates can be grouped into macrostates, and various subsets of its physical microstate transitions can be grouped into macrostate transitions. Which groupings count as the computational function actually performed by D (as opposed to its teleological function)? This is the question at the core of this paper.

    The answer that is mostly implicit in my work (Piccinini 2015) is that the computational function actually performed by D is defined in a way that is parasitic on D’s teleological function, in the following sense. To determine D’s actual function, D’s microstates must be grouped the way its teleological function demands, and when that is done the macrostate transitions that actually occur between members of such groupings constitute the function actually computed by D. If the function actually computed by D is different from its teleological function, a miscomputation occurs.

    Given the above account, which is mostly implicit in my work, actual functions are individuated in the same way as teleological functions. If teleological functions are individuated widely (as they are according to me and Tucker), actual functions are also individuated widely. Tucker’s paper does not make all of the above explicit, and it would be valuable if it did so. In fact, if all that the present paper did were raise the questions I listed above and make explicit that various philosophers, myself included, imply this specific answer to those questions, the paper would already be worth publishing.

    (What I just made explicit is a way of cashing out what Tucker calls option C. Tucker misinterprets me on this point, suggesting that Piccinini “committed himself to option A,” namely, to actual functions being always identical to teleological functions. Unless Tucker offers compelling textual evidence of such an uncharitable interpretation, he should withdraw this claim. But it would be fair to point out that I am not explicit on the relation between teleological function individuation and actual function individuation. Incidentally, Options A and B appear to be absurd, so they don’t seem worth discussing explicitly. The value of the paper lies elsewhere.)

    In addition, Tucker argues that actual functions need not be individuated in the same way that teleological functions are. Specifically, he argues that actual functions may be individuated narrowly while teleological functions are individuated widely. This is an interesting suggestion, especially given that in the old days philosophers of computation favored narrow individuation of computational function, but such a view was mostly abandoned in favor of wide individuation (prominent exception: Frances Egan; although the extent to which her view of functional individuation is really narrow in the relevant sense is questionable; see my discussion of her view in Piccinini 2015).

    What Tucker doesn’t say is how his proposed narrow individuation of actual function works. He just posits it without explaining how it can be accomplished in practice. Among the indefinitely many groupings of subsets of physical microstates into computational states that are possible, which groupings capture the actual computational function of the system? If Tucker wants to offer a serious proposal to the effect that actual function can be individuated narrowly (without piggybacking on the teleological function of the system, which, Tucker agrees, is a wide function), he should answer this question without helping himself to anything outside the system. I’m not aware of any principled (i.e., non-arbitrary, non-ad-hoc) way of doing this. If Tucker has a principled answer to this question, his proposal would be worth serious consideration indeed.


    Some minor comments on specific statements by Tucker:

    “Piccinini claims that ‘if the [computing] mechanism malfunctions, a miscomputation occurs.’”

    The surrounding context makes clear that what I mean is that when a computing mechanism malfunctions with respect to its computing function, a miscomputation occurs. Fresco and Primiero probably mean the same thing. My big shtick is that only some physical systems have the function of computing, and only some of their components have computing functions… So what I say here is consistent with “miscomputation being a special kind of malfunction,” as Tucker says. However, I also argue that only some kinds of miscomputation are the result of malfunction.

    “If the battery dies, a system may fail to compute what it is supposed to compute.”

    If the battery dies, this is not a malfunction. A malfunction occurs if the battery breaks.

    “An account of miscomputation must have at least three parts: (i) an account of computational behavior; (ii) an account of computational norms; and (iii) an account of how it’s possible for a system to compute something it isn’t supposed to be computing… When Piccinini (2015: 148-50) “explains” how his mechanistic theory of computation accounts for miscomputation, he fails to explicitly address any of (i)-(iii).”

    This is too strong and uncharitable. I give an explicit account of teleological functions and the sense in which they are normative, so I give an account of the relevant norms. I also give an explicit definition of miscomputation, which explains under what conditions miscomputation occurs. What I am not explicit about is how the individuation of the function actually computed works and depends on the teleological function of the system (as I pointed out above).

    “Piccinini (2015: 43) claim[s] that teleology determines computational structure, so [he] opt[s] for either A or B”.

    More precisely, I (2015: 43) claim that functional properties are individuated widely. This talk of “teleology determining computational structure” is an awkward phrase coined by Tucker. It implies that there is some sort of dependence between teleology and computational structure, which is a strange and unclear thing to imply. I recommend finding a more perspicuous way to put the point.

    References
    G. Piccinini (2015). Physical Computation: A Mechanistic Account. Oxford: OUP.

    1. Many thanks to Gualtiero. The next draft will need substantial changes to fully address his comments, especially insofar as I’ll need to add discussion of Parasitic Individuation.

      Have I offered a serious individualist proposal?
      Piccinini would like to see my computational individualist proposal fleshed out in more detail; otherwise it doesn’t constitute a serious alternative. In reply, I say my theory is as well fleshed out as Piccinini’s. After all, my theory just is Piccinini’s minus the teleology. Since I don’t require teleology and he does, I’m committed to more things computing than he is. I don’t regard this as a problem. As long as Digital Perseverance is false, there are no clear disadvantages to dropping the teleology. Indeed, given that our theories are nearly identical, it’s unlikely that Piccinini can find an advantage that his theory enjoys over mine. Yet my theory enjoys one big advantage: it can easily explain miscomputation.

      Parasitic Individuation Explained
      When I wrote this draft, I wasn’t sure how Piccinini would want to individuate the computational structure of malfunctioning systems. I didn’t find his text clear on this point. Piccinini offered something like the following proposal in his comments and claimed that it is implicit in his book:
      Parasitic Individuation: The system’s microstates (e.g., electrical charges) must be grouped together into macrostates, or digits, as demanded by proper function. If the system’s current behavior involves inputs and outputs that count as digits, then the system’s actual computation is given by the digits actually inputted and outputted.
      To see the initial appeal of Parasitic Individuation, consider an illustration. Let “pf(m) = n” represent the computation that system S should perform. Let “af(m) = n” represent the computation actually performed by S. Suppose that, in the current circumstances, pf(0,1) = 1, 1, 0, where proper function individuates microstates into two digits, 0 (≤5v) and 1 (>5v). If S functions properly, then af(0,1) will likewise equal 1, 1, 0. Yet suppose that S is damaged in such a way that S receives the inputs 3v and 7v and outputs 3v, 3v, 3v. Parasitic Individuation tells us that the computation performed by S is af(0,1) = 0, 0, 0. Due to malfunction, we get one value for pf(0,1) and a distinct value for af(0,1). Thus, Parasitic Individuation accounts for the kind of miscomputation we are after, the kind in which a system implements one computation when it should have implemented a distinct computation.
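      A minimal sketch of this illustration, under the grouping just described (the threshold and the names are illustrative, not Piccinini’s):

          # Grouping demanded by proper (teleological) function:
          # digit 0 for charges of at most 5v, digit 1 for charges above 5v.
          def digit(voltage):
              return 0 if voltage <= 5.0 else 1

          # Parasitic Individuation: group the observed microstates (voltages)
          # as proper function demands, then read off the actual mapping.
          def actual_computation(input_voltages, output_voltages):
              ins = tuple(digit(v) for v in input_voltages)
              outs = tuple(digit(v) for v in output_voltages)
              return ins, outs

          # The computation S should perform: pf(0,1) = (1, 1, 0).
          PF = {(0, 1): (1, 1, 0)}

          # Damaged run from the text: inputs 3v and 7v, outputs 3v, 3v, 3v.
          ins, outs = actual_computation((3.0, 7.0), (3.0, 3.0, 3.0))
          print(ins, "->", outs)    # (0, 1) -> (0, 0, 0)
          print(outs != PF[ins])    # True: af(0,1) differs from pf(0,1),
                                    # so the system has miscomputed.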
      While Parasitic Individuation has some appeal, I doubt it’s implicit in Piccinini’s book. Consider, for example, what he says on page 128: “During any time interval when a mechanism does not perform its function…microstates do not belong to any digit.” If the microstates of malfunctioning systems don’t belong to any digit, then in what sense do malfunctioning systems compute? And if they don’t compute, then they don’t miscompute either. In any event, this passage on page 128 seems incompatible with Parasitic Individuation.
      Of course, it’s not important whether Parasitic Individuation is implicit in Piccinini’s book; what’s important is whether it provides a satisfactory externalist account of miscomputation. It doesn’t.

      Parasitic Individuation Rejected
      A serious problem is that Parasitic Individuation overcounts computations and miscomputations. Individualists and externalists generally agree that only certain transitions between inputs and outputs count as computations. A mapping from inputs to outputs does not count as a computation if:
      a. As the system is currently composed, variations in the inputs are causally irrelevant to variations in the outputs; or
      b. As the system is currently composed, inputs cause outputs (e.g., no input charge, no output charge), but it is entirely random which output is matched with a given input.
      A malfunctioning computing device counts as computing, according to Parasitic Individuation, as long as the inputs and outputs are computational states in properly functioning systems. In the imagined case above, the damaged system receives input states (3v, 7v) and produces output states (3v, 3v, 3v), which are computational states in properly functioning systems. That was all it took to show, given Parasitic Individuation, that the imagined system was performing the computation af(0,1) = 0, 0, 0. Yet if the system is damaged so that the input states are causally irrelevant to the output states, or the inputs only randomly cause the outputs, it is implausible that the damaged system is genuinely computing. For example, suppose that the mechanism that controls the output system is stuck on 3v, so that the input charges (0,1) do not explain why it outputs what it does (0, 0, 0). The system’s behavior isn’t computational. While the system malfunctions, it isn’t computing, and so it isn’t miscomputing either. To endorse Parasitic Individuation is to overcount both computations and miscomputations.
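      To make the overcounting concrete, here is a sketch of the stuck-output case, reusing the digit grouping from the sketch above (again, an illustration under the same assumptions):

          # The output mechanism is stuck at 3v: whatever the input charges,
          # the output charges are (3v, 3v, 3v).
          def stuck_system(input_voltages):
              return (3.0, 3.0, 3.0)

          for ins in ((3.0, 7.0), (7.0, 7.0), (3.0, 3.0)):
              outs = stuck_system(ins)
              print(tuple(digit(v) for v in ins), "->",
                    tuple(digit(v) for v in outs))
          # Every input maps to (0, 0, 0) even though the inputs are causally
          # irrelevant to the outputs.  Parasitic Individuation still counts
          # each run as a computation (and a miscomputation): the overcounting
          # problem.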
      At this point, the temptation is to think of ways to refine Parasitic Individuation so that it avoids the overcounting problem. Remember, however, that the game we are playing is comparative. The goal is not merely to make computational externalism work, but to make it work as well as my individualist account works. The simplicity and flexibility of my account set a high bar. Just pick your favorite account of computational norms and then combine it with my individualist account of computational structure. It’s that easy to account for miscomputation.
      Should I choose to endorse a teleological account of computational norms, I can accommodate the popular idea that properly functioning systems have metaphysical priority over malfunctioning systems. This priority would be classificatory and normative: e.g., a system counts as a system of kind K and is thus subject to the computational norms of K because it bears some relation to (certain) properly functioning systems of kind K.
      Yet, from the individualist’s point of view, it is a mistake to further claim that the priority is also individuative, that is, that the computational structure of properly functioning systems partly individuates the computational structure of malfunctioning systems. This further kind of priority gives us computational externalism, but it makes computational individuation more complicated than it has to be. I treat the computational structure of all systems in exactly the same way: whether a system is functioning properly or improperly, a system’s computational structure is its internally individuated structure (that satisfies the definitive list). That’s simple. By further demanding individuative priority, Parasitic Individuation individuates the computational structure of properly functioning systems one way and the computational structure of malfunctioning systems another. It pays the price of complication just to get the overcounting problem in return. Alternative teleological externalist approaches may fare better, but the goal is not to do better than Parasitic Individuation; it’s to do better than the flexible, straightforward account that I proposed. And to do that, the externalist will need to show that if we pay the price of complication, we get something worthwhile in return.

      1. Hi Chris, thanks for the paper. I’m finding this exchange between you and Gualtiero really interesting, and I was wondering if you could clarify something for me.

        In response to Gualtiero saying that you don’t explain “how [your] proposed narrow individuation of actual function works”, you say “my theory is as well fleshed out as Piccinini’s. After all, my theory just is Piccinini’s minus the teleology.”

        But isn’t it a live option that Gualtiero’s theory (or rather, the ‘definitive list’ that he gestures at), when the teleology is removed, fails to identify a unique computational structure for a given system? If that’s the case (in particular, if Gualtiero’s-theory-minus-teleology leaves it massively under-determined what a system’s computational structure is), then it looks like his criticism has bite. But, not being too familiar with the debate, I’m not sure if that’s possible.

        1. Hi Luke, I think you are pushing on an important point. Do you think that Chalmers’ account of computation massively underdetermines a system’s computational structure? If so, then from your point of view mine will probably be massively underdetermined too. From my point of view, the problem for my view (if it has one) would not really be about (massive) underdetermination but overgeneration of computations.

          Oversimplifying slightly, Piccinini and I agree that computing devices must have certain narrowly individuated functional profiles in order to even be eligible to compute. Digestion doesn’t compute simply by having a telos, and I can’t make my Lego creation a computing system simply by intending that it compute. The difference between Piccinini and me is basically this: I hold that these narrow functional profiles are sufficient for having a computational structure, whereas Piccinini also demands that these narrow structures have a telos. Thus, I think we equally well determine which things compute. Actually, he may do worse insofar as he underdetermines which things have a telos. In any event, whenever something lacks a telos but is otherwise an eligible computing system, I will be committed to computations to which Piccinini isn’t committed.

          What’s the problem with being committed to more computations, even computations that are uninteresting and no one would ever care about? Even before the teleology, Piccinini imposes more constraints than Chalmers, and I think Chalmers is right that his view doesn’t trivialize computations. In fact, in some cases, my view may have intuitively better results than Piccinini’s. If two devices are physical duplicates and are undergoing the same physical processes, Piccinini’s view entails that one might be computing and the other not, precisely because only one of them has teleology. I find this mildly counterintuitive (and anyone who has swampman worries about teleological accounts of things is likely to have a similar worry).

          In short, I don’t think the relevant worry is best pressed as a worry about whether computations are underdetermined. I think it’s best pressed as a worry concerning whether my theory delivers intuitive judgments about which things compute. I think the results it delivers are just fine, but I think one could sensibly disagree with me on this point.

Comments are closed.
