Santiago Arango-Muñoz
G.I. Conocimiento, Filosofía, Ciencia, Historia y Sociedad
Instituto de Filosofía
Universidad de Antioquia
Juan Pablo Bermúdez
Facultad de Ciencias Sociales y Humanas
Universidad Externado de Colombia
Abstract: Many philosophers take memory to be merely a passive capacity for retaining and retrieving information: information and experiences are encoded, stored, and subsequently retrieved without any control or intervention on the subject’s part. In this paper we defend an active account of memory according to which remembering is a mental action, not merely a passive event. According to the reconstructive account, memory is an imaginative reconstruction of past experience; and given the imperfect character of memory outputs, some kind of control is needed. Metacognition is the capacity to monitor and control mental processes and dispositions. Drawing on recent work on the normativity of automaticity and automatic control, we distinguish two kinds of metacognitive control: top-down, reflective control on the one hand, and automatic, intuitive, feeling-based control on the other. We propose that whenever the mental process of remembering is controlled by means of intuitive, feeling-based metacognitive processes, it is an action.
1. Introduction
There has been a recent shift in our understanding of episodic memory. Traditionally conceived as the capacity to store and retrieve information from our personal past, episodic memory is now thought of as a particular form of a broader cognitive function: that of imagining, or mentally simulating, ego-centric events, whether they belong to the past or the future, and whether they are actual, hypothetical, or counterfactual. Understanding episodic memory as part of this more general capacity for ‘mental time travel’ allows us to account for why it often produces not a reliable reconstruction of the specific details of the past, but rather a bare-bones reconstruction of past situations that diverges from the remembered events.
In this paper we seek neither to defend this reconstructive conception of episodic memory, nor to argue that it is empirically better grounded than the traditional, preservative account (these tasks have been carried out by De Brigard (2014), Michaelian (2016), and Schacter and Addis (2007), and we refer the interested reader to those works). There are at least three ways to understand the constructive theory of memory: as a case of a more general capacity for mental time travel (Suddendorf & Corballis, 2007), as episodic hypothetical thought (De Brigard, 2014), and as episodic imagination (Michaelian, 2016), but we need not decide among them here. Rather, we assume the imaginative reconstructive conception (but analogous considerations should apply to the other versions), and ask a further philosophical question: does remembering constitute a mental action? Is the imaginative reconstruction, or mental simulation, involved in episodic memory an action? If so, what kind of agentive processes and mechanisms are at stake in remembering?
We argue that remembering is indeed a mental action. But in order to argue for this we have to face objections from thinkers who worry that the “ballistic” nature of the production of mental content leaves no room for agency and control in imaginative processes. In particular, Strawson (2003) has argued that although there are some cases of mental actions, they take place prior to, and ‘set the stage’ for, the emergence of mental content. I may agentively try to remember the first time I read The Myth of Sisyphus, and these efforts to bring a certain mnemonic content to mind may count as mental actions, but the actual imaginative reconstruction of the situation in which I first encountered that text (images of the place, the time, the people around me, etc.) is not under my agentive control. It occurs spontaneously, in the sense of ‘involuntarily’; its production mechanisms are outside of my awareness; and I cannot control it, as demonstrated by the fact that if I intended not to remember the first time I read Camus’ text, I would still end up bringing (at least some of) these mental contents to mind.
This is the objection from mental ballistics: imagination is a ballistic mental process, and given that episodic memory is a kind of imagination, episodic remembering is also a ballistic process that cannot be a mental action.
This essay argues against the mental-ballistics interpretation of reconstructive episodic memory. We claim that the imaginative reconstruction that constitutes episodic memory is not a ballistic phenomenon, but rather an instance of agentive control. To show this, (2) we distinguish between two kinds of agentive control: reflective and automatic. The latter is often taken to be impossible, because automaticity and control are often considered opposites; but we show that there are cases of automatic and controlled mental processes. Then we argue that (3) there are two levels of metacognition that correspond to the reflective and automatic control processes distinguished above, and that automatic, feeling-based metacognition is a control process that can be applied to mental processes like remembering. Lastly, (4) by looking at the evidence for remembering as an imaginative reconstruction, we argue that this process is characterized by a particular form of feeling-based metacognitive control, and that the presence of this control process reveals the agentive nature of episodic memory. Remembering is a mental action because the imaginative reconstruction processes that constitute it are imperfect and therefore need some kind of control, and because episodic reconstruction is constrained and guided by a process of feeling-based metacognitive control.
2. Mental ballistics, automaticity, and control
The idea of memory as reconstruction seems to imply some kind of activity: remembering requires bringing the multiple traces of the remembered experience together in a mental simulation of the event. Although this looks prima facie like an agential process, there is a difficulty: a number of lines of cognitive science research have shown that most of the mental processes that shape our daily lives take place automatically: they are fast, associative, working-memory-independent, and therefore apparently produced outside of effortful cognitive control (for general overviews see Evans (2010b) and Kahneman (2011)).
Thus, although the reconstructive account suggests that memory is an agentive process, it still has to confront the “threat of automaticity” (Wu, 2013): although remembering is a reconstruction, this reconstruction seems to take place automatically without the subject’s agential intervention. Wayne Wu remarks that “automaticity is what makes decisions about mental agency controversial” (Wu, 2013, p. 244). Most philosophers consider memory to be a more or less automatic capacity. In a recent paper, for example, Andy Clark claims that “ordinary biological memory, for the most part, functions in a kind of automatic, subterranean way” (Clark, 2015). If this is so, then calling ‘remembering’ an action is highly controversial.
The core philosophical issue here is that automatic processes seem to be by their very nature non-agential. After all, we tend to call ‘automatic’ those processes that are invariably triggered by the same kind of stimuli and respond to them in systematically the same way. They are rigid, difficult or impossible to correct, and insensitive to novel evidence. In a word, automatic mental processes seem to be ballistic: they “run to completion once triggered and cannot be aborted in midcourse” (Stanovich, 2004, p. 39). Add to this what Wu calls the “simple connection” between automaticity and control: “automaticity implies the absence of control […] by the subject” (2013, p. 246). This traditional view has been questioned empirically, but it retains a powerful hold over our theoretical intuitions. The simple connection entails that agentive control consists in the deployment of top-down attention and cognitive effort toward the attainment of a goal, and that automatic processes, being fast and effortless, are devoid of agentive control and can participate in it only to the extent that they are subsumed under higher-order, cognitively effortful mental processes.
According to this view, if remembering were to be considered an action, the subject would have to concentrate, pay attention, avoid distracting elements, try mnemonic strategies intentionally, gather cues…, and all this would imply a certain amount of non-automatic attention, consciousness, and mental effort, even if not particularly “strenuous” (Mele, 1997). But even if this happens, even if the preparatory ground-setting steps are cognitively effortful and controlled, the very process of imaginative reconstruction seems to work spontaneously and sub-personally: when I remember what I had for dinner last night, I cannot really try to remember; the memory traces are assembled and presented as if by magic. All I can do at the personal level is try to bring it about that I remember, conjuring the content, and wait for it to arrive (Mele, 2009).[1] Thus, if automatic processes are ballistic, there is nothing particularly agential about them.
2.1. The mental ballistics argument: from imagination to memory
Given this view of automatic processes as ballistic, Strawson concludes that the space for action in the mental is rather small: “the role of genuine action in thought is at best indirect. It is entirely prefatory, it is essentially—merely—catalytic” (2003, p. 236). The mental phenomena that can be properly called actions are those that ‘set the stage’ for the emergence of mental content, but not the content’s emergence itself. Setting myself up for tackling a problem may be a mental action; the actual solution’s appearance in my mind is not. Bringing my attention back to a task after I have been distracted may be a mental action; the subsequent steps involved in completing the task probably are not.
This also applies in the case of imagination. If I ask you to imagine a pink elephant gracefully walking on a rainbow, you can do this immediately and intuitively, in a way that suggests agentive control: you can picture it big or small, you can make it dance, smile, or wave its hat. This sense of agentive control over imagination is, however, set in a different light if I ask you not to imagine a pink elephant at all. Now it is harder to make imagination comply: you may try to bring it about that you do not imagine the elephant (by occupying your mind with something else), but you cannot directly try to not imagine the elephant. This reveals that the actual production of imaginary content is a rather ballistic and rigid process that takes an input and automatically produces content in a way that is not responsive to the agent’s intention or to its structure and content (perhaps the experience of control in the first case was due to the ease with which the content was produced, rather than to proper intentional control). Thus, the imaginative production of mental content seems not to be a case of mental action. If so, the only agentive part of the process would be the initial mental stage-setting.
Now, if episodic memory is a kind of imagination, would the same argument apply to it? If we ask you to remember what your childhood bedroom looked like, it seems like you can exert swift, even effortless control over this process, focusing on different aspects of the room (the floor, the positions of the bed and other furniture, the lighting…). But if we ask you not to remember your childhood bedroom, the recollection process becomes harder to control. Mnemonic contents, like imaginary ones, seem to be produced automatically after the reception of a relevant stimulus, and therefore, ballistically and outside of the subject’s agentive control.
2.2. The simple-connection reply to the mental ballistics argument
Here is a way to reply to these arguments: content generation may be automatic and ballistic, but it is nevertheless a constitutive part of mental action whenever the automatically-generated content is triggered by, and responsive to, the agent’s occurrent intention. Call this the ‘simple-connection reply’, since it argues for imagination, deliberation, and the like being mental actions, but does so without putting the simple connection (i.e. automatic processes are not controlled) into question.
Briefly, the strategy is as follows (Wu, 2013): mental action requires the production of a specific mental content out of many possible ones (in trying to remember my childhood bedroom, I could end up generating images of my current bedroom, other rooms of my childhood home, someone else’s childhood bedroom, and so on); and so mental action requires selecting the right content to be generated. Thus, a certain mental process counts as an action if the produced mental content corresponds to the intention’s representational content, since this implies that the agent has selected the right content, i.e. has directed her attention to the proper content. This can occur only if the agent’s occurrent intention plays a top-down causal role in directing attention toward the right content. And to the extent that her intention plays this causal role, this process is an instance of agentive control, regardless of whether some of its sub-processes are automatic and ballistic.
We think, however, that this simple-connection strategy falls short of fully answering the problem of mental ballistics, for three reasons. First, there are many cases of controlled remembering where there is no apparent prior intention. For example, I might have the intention to go back home to retrieve my credit card. But then, as I am getting ready to go, I suddenly remember that the card is maxed out; so I revise my intention to go back home. In this case, the mnemonic process was not structured top-down by my intention: the memory’s content is not a proper solution to a selection problem established by my intention. And yet, the mnemonic process displays some kind of agentive control in bringing to mind relevant information—so relevant, indeed, that it led me to revise my intention. This kind of mnemonic agentive control cannot be explained top-down.
Second, although we agree that intentions often play this top-down role, the account so far does not explain how intentions do precisely that. How does intention’s capacity to coordinate and structure automatic processes work? In other words, what makes automatic processes—otherwise blind, ballistic and unintelligent—responsive to the intention’s content, and able to yield adequate results? Although the top-down causal role of intentions should certainly be a part of the story of mental agency, there is another part of it that is missing, i.e. the part that makes automatic mental content-generation processes susceptible to being recruited by higher-order processes.
Third, if we have no story to tell about the agent’s role in guiding her automatic processes, then we must still accept Strawson’s view that the agent generates an intention, holds it in mind, and then simply waits for the automatic processes—over which she has no control—to generate the relevant content. This fails to agree with our phenomenology of mental action: not only do we experience that we can bring the relevant mnemonic content to mind; we also feel like remembering is sometimes easier, sometimes harder; we feel that a given memory is more or less accurate; sometimes we feel that we can remember something if we try harder, whereas other times we simply know that we will not be able to remember, no matter how hard we try. All of these phenomena suggest that there is more to agentive mental control than simply holding an intention in mind and waiting for the content. They suggest that we control not only the top-down initiation of remembering, but also (aspects of) the bottom-up production of mental content.
If this is right, and if content production is an automatic process, then there may be a more thorough reply to the mental-ballistics argument than the simple-connection account. And we think that we should let go of the simple connection because there is empirical evidence that some automatic mental processes are also controlled.
2.3. Can automatic processes be controlled?
Psychological research has traditionally considered the concepts of automaticity and control as opposites.[2] One of the strongest traditional pieces of evidence for the uncontrollability of automatic processes comes from Stroop-type effects, where unattended dimensions of a stimulus interfere with the attention-demanding task that subjects attempt to perform (MacLeod, 1991; Stroop, 1935). Findings like these suggest that automatic processes are ballistic: the relevant input triggers them almost invariably, and, once activated, they run rigidly to completion. If automatic processes are ballistic, they are not in themselves controlled or controllable.
We present now some evidence that challenges the ballistic interpretation, showing that some automatic processes do not respond invariably to the same stimulus, or run rigidly to completion once triggered, but rather display contextual and normative sensitivity.
Automatic processes and context sensitivity
The traditional view that automatic processes are reflex-like, activated by the mere presence of a given input, carries its influence to this day (Bargh, Chen, & Burrows, 1996; Gendler, 2008a).[3] And yet, no sensible account, no matter how mechanistic, could deny the power of context. Stroop effects themselves have been found to be modulated by contextual features, like how many dissonantly coloured characters each word contains, the participant’s direction of attention, and the goals of the task. This suggests that the context that triggers the activation of a given automatic process can include not only external features of the environment, but also the agent’s own cognitive dispositional and occurrent states. In fact, there is ample evidence that, contrary to the ballistic interpretation, automatic process activation is affected by “where the current focus of conscious attention is, what the individual was recently thinking, or what the individual’s current intentions or goals are” (Bargh, 1997, p. 3).[4]
The conditionality of automaticity is so broad in scope that conceiving automatic processes as ballistic is a misdescription. If reflexes are indeed triggered by the mere presence of the relevant stimulus, then this is a way in which (at least some) automatic processes are unlike reflexes, since in many cases there is nothing like a clearly identifiable stimulus whose mere presence almost invariably activates the same automatic process. Automatic processes, on the contrary, are sensitive not only to a specific triggering condition, but also to the agent’s motivational states and occurrent goals.
Automatic error detection and correction
Several researchers have proposed that automaticity’s core trait is its being unstoppable once initiated. This seems phenomenologically correct: when we see a familiar word we cannot help but read it, and when we see a person’s face we cannot but recognize it as such.[5]
It thus seems that automatic processes are recalcitrant to available evidence, so that their behavioural output cannot be cancelled or modified even when we have clear evidence for its inappropriateness. Gendler (2008a, 2008b) mentions examples like the fear that even wise persons feel when hanging from a precipice despite being certain that the cage they are suspended in is completely safe, and the unavoidable disgust caused by the prospect of eating delicious, yet feces-shaped, chocolate fudge. Such epistemic insensitivity is yet another reason to consider automatic processes uncontrollable, since they cannot be corrected mid-performance on the basis of updated information.
Yet phenomenology also suggests that automatic processes are often self-correcting. When you over-squeeze a plastic cup, you immediately readjust the strength applied by your hand. As you walk, run, skate, cycle, etc., you automatically adjust your posture on the basis of visual and vestibular cues, in ways so complex that explicit calculation would be unable to specify, relying instead on automatic processes acquired through practice. In fact, automatic error-correction is so fast and efficient that oftentimes we are completely unaware of its occurrence. Koch & Crick (2001, p. 893) discuss a study in which participants must move their eyes and fingers rapidly toward an appearing light at their visual field’s periphery. They reliably do this even if the light moves a bit to the left or right as their eyes move towards it, and interestingly they do not report the light having moved.
Thus, automatic processes can display an ability to adapt to enviromnental changes, to detect and resolve tensions present within the dynamic stream of associative activity (Brownstein & Madva, 2012; Rietveld, 2008). You probably just performed an automatic error correction in reading this paragraph, by reading ‘environmental’ instead of ‘enviromnental’.
This automatic normativity is not exclusive to bodily action: it is also present in mental action. In Walsh and Anderson’s (2009) paradigm, subjects were confronted with multiplication problems that they could solve using one of two strategies: calculating mentally or using an on-screen calculator. In selecting a strategy, subjects also had to take into account a possible delay of the calculator (4 seconds). There were three types of problems (easy, intermediate, and difficult) and two conditions (with calculator delay and without delay). Response time, accuracy, and the cursor’s movement from the starting position to the calculator or to the answer box were recorded. Participants quickly initiated a movement corresponding to an initially favored strategy, and then decided whether to complete the problem using that strategy (or to shift to the other) by conducting a more thorough evaluation while moving the cursor across the screen. The initial movement of the cursor, often redirected from one strategy to the other, showed “imperfect sensitivity to the current problem”; but the “commitment to a specific strategy, which occurred later, reflected nearly perfect sensitivity to the profitability of mental and calculator solutions” (Walsh & Anderson, 2009, p. 345). Participants behaved adaptively, performing mental calculations less frequently as difficulty increased, and more frequently on difficult problems when the calculator was delayed. Strategy selections were extremely fast and depended on the interaction between problem difficulty and calculator responsiveness.
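To make the structure of this paradigm more concrete, here is a minimal, purely illustrative sketch of the dynamic just described: a fast initial preference that a slower, more thorough evaluation can redirect mid-course. The cost values, function names, and decision rule are our own toy assumptions, not Walsh and Anderson’s model.

```python
def fast_preference(difficulty: str) -> str:
    """Quick first guess: easy problems pull toward mental calculation."""
    return "mental" if difficulty == "easy" else "calculator"

def thorough_evaluation(difficulty: str, calculator_delay: float) -> str:
    """Slower evaluation: rough expected time cost of each strategy (illustrative values)."""
    mental_cost = {"easy": 2.0, "intermediate": 6.0, "difficult": 12.0}[difficulty]
    calculator_cost = 5.0 + calculator_delay
    return "mental" if mental_cost < calculator_cost else "calculator"

def select_strategy(difficulty: str, calculator_delay: float) -> str:
    """An initial movement can be redirected once the thorough evaluation completes."""
    initial = fast_preference(difficulty)
    final = thorough_evaluation(difficulty, calculator_delay)
    return final + (" (redirected mid-course)" if final != initial else "")

if __name__ == "__main__":
    for difficulty in ("easy", "intermediate", "difficult"):
        for delay in (0.0, 4.0):
            print(f"{difficulty}, delay={delay}s:", select_strategy(difficulty, delay))
```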
Moreover, there is also fast automatic error detection in mental action. In a recent study, Fernández Cruz et al. (2016) designed an experiment to test subjects’ awareness of their errors in fast mental calculation. Participants were presented with a triplet of numbers (e.g., 2 4 8) and had to judge, with a Yes/No answer, whether the middle number was the arithmetic mean of the other two. Participants were instructed to press the Yes/No buttons as fast as possible, and then to report, again using the Yes/No buttons and as fast as possible, whether they had a feeling of error. Time pressure was exerted to prevent reflective control and analytical thought. Interestingly, feeling-of-error reports were strongly correlated with actual arithmetic errors; in other words, subjects reported having a feeling of error mainly when they had actually committed an error. Additionally, the experimenters tested participants’ confidence in their answers when they did not report feelings of error. Surprisingly, in these cases participants reported less confidence for wrong answers than for right ones, suggesting that even when subjects did not have a conscious feeling of error they still had an implicit awareness of their errors.
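The logic of the task, and of the monitoring signal it probes, can be schematized as follows. This is a toy sketch under our own assumptions (the noise rates and function names are invented for illustration), not the experimental code of Fernández Cruz et al.; it merely shows how a fast, time-pressured answer and a slower re-evaluation can come apart, yielding a feeling-of-error signal that correlates with, without being identical to, actual errors.

```python
import random

def is_mean(triplet):
    """Ground truth: is the middle number the arithmetic mean of the outer two?"""
    a, b, c = triplet
    return 2 * b == a + c

def fast_answer(triplet, noise=0.15):
    """Hypothetical time-pressured response: usually, but not always, correct."""
    correct = is_mean(triplet)
    return correct if random.random() > noise else not correct

def slow_recheck(triplet, noise=0.05):
    """A slower, more reliable re-evaluation; still imperfect."""
    correct = is_mean(triplet)
    return correct if random.random() > noise else not correct

def feeling_of_error(triplet, answer):
    """Toy monitoring signal: fires when the re-check contradicts the fast answer."""
    return answer != slow_recheck(triplet)

if __name__ == "__main__":
    random.seed(1)
    for t in [(2, 4, 8), (2, 5, 8), (3, 6, 9), (1, 4, 6)]:
        ans = fast_answer(t)
        print(t, "answer:", ans, "actual error:", ans != is_mean(t),
              "feeling of error:", feeling_of_error(t, ans))
```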
So we have automatic error-detection systems, which sometimes produce an explicit feeling of error. Add to this that we are able not only to detect, but also to correct errors automatically. In a study that compared people’s abilities to correct, report, and recall errors (Rabbitt, 2002), participants were instructed to look at a screen split into four squares and to press the corresponding button when a dot appeared in each square. Group 1 was instructed to immediately correct the mistakes they made during the task; group 2 was instructed to press a fifth button every time they had a feeling of error; group 3 was randomly interrupted and asked whether they remembered having made a mistake in their last three responses; and group 4 was told to simply ignore all errors. The interval between each response and the next stimulus (the Response–Signal Interval, or RSI) varied randomly throughout the task between 150 ms and 1 s.
The first relevant finding was that for all participants the response that followed a mistake was slower than the previous ones. This slowing occurred even when they were unable to report or recall the errors. Further, participants in group 1 were remarkably fast and accurate in correcting their errors: across all RSIs, errors were on average faster than correct responses, and error corrections were on average even faster than errors. Additionally, participants were much more accurate in correcting errors than in reporting or recalling them. Across all conditions, it took participants much longer to report an error than to correct it, and they failed to report errors much more often than they failed to correct them. Given that error correction can occur remarkably quickly (as quickly as 40 ms after the error (Rabbitt, 1966a, 1966b)), it cannot depend on reflective recognition and rectification processes; it must therefore be automatic. Crucially, these automatic error corrections always increased response accuracy, and never turned a right response into a wrong one (Rabbitt, 2002, p. 1082).
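The architecture these findings suggest can be sketched as follows: an automatic comparator issues a correction almost immediately after a slip, while the explicit report pathway is slower and frequently misses the error altogether. The latencies and probabilities below are illustrative assumptions of our own, not Rabbitt’s data.

```python
import random

def run_trial(target, slip_prob=0.2, report_miss_prob=0.5):
    """One key-press trial. If a slip occurs, an automatic comparator issues a
    correction almost immediately, while an explicit report comes much later
    and is often missing altogether. Times are in milliseconds and illustrative."""
    pressed = target if random.random() > slip_prob else (target % 4) + 1
    events = [("response", pressed, 300)]
    if pressed != target:
        events.append(("automatic correction", target, 340))       # ~40 ms after the error
        if random.random() > report_miss_prob:
            events.append(("explicit error report", target, 800))  # slow and unreliable
    return events

if __name__ == "__main__":
    random.seed(3)
    for trial, target in enumerate([1, 2, 3, 4, 2, 1], start=1):
        print(f"trial {trial}:", run_trial(target))
```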
Thus, automatic processes (both mental and bodily) can be sensitive to error, and able to perform quick and efficient self-correction without the intervention of slower and coarser-grained higher-order processes.[6] More specifically, the aforementioned evidence suggests that we can perform fast and intuitive error-detection and -correction processes that are independent of the slow, cognitively effortful processing that relies on working memory. Following dual-process accounts of human cognition, we distinguish here between automatic or intuitive processes, which can be performed independently of working memory, and reflective processes, whose performance requires the use of working memory (Evans, 2010a; Nagel, 2014).[7] Here ‘working memory’ (Baddeley, 2007) refers to the set of higher-order cognitive capacities that allow for the mental manipulation of task-oriented representations.
Thus, there are two different ways of exerting cognitive control over our behaviour: one of them is reflective control—the often slow, effortful top-down control that we exert by recruiting working memory in novel or attention-demanding tasks. The other is intuitive or automatic control—the fast, rather effortless and intuitive control that we exert automatically, without the intervention of working memory.
The cases discussed above reveal that automatic processes are not (at least not necessarily) ballistic, evidence-insensitive phenomena. It thus seems too harsh to say that automatic processes are “unable to keep pace with variation in the world or with norm-world discrepancies” (Gendler, 2008b, p. 570), even if they do behave that way in some cases, or with respect to certain kinds of evidence—particularly when it is in the form of novel propositional knowledge. We should rather agree with Brownstein and Madva in claiming that automatic processes “can be norm-sensitive in virtue of their responsiveness to affective states of disequilibrium. Responsiveness to such affective states is flexible, self-modifying, and […] a genuinely normative phenomenon” (Brownstein & Madva, 2012, p. 428). Those affective states of disequilibrium can, of course, be misguided with respect to the overall available evidence (as in the case of the trembling wise man hanging over a precipice from a perfectly safe cage), but need not be so: affective states of “felt tension” or “directed discontent” can be a part of reliable dynamic on-line error-detection and -correction systems. If we use such mechanisms to control mental action, they deserve the name ‘automatic metacognition’.
3. Automatic metacognition
3.1. Control as the mark of action
Inspired by neuroscientific accounts of motor control (Jeannerod, 2006; Wolpert & Ghahramani, 2000; Wolpert, Ghahramani, & Jordan, 1995), recent theorists of action have centered their definitions of action on processes of control (Mossel, 2005; Pacherie, 2008; Proust, 2005; Shepherd, 2014; Wu, 2016). From this viewpoint, performing an action requires not just consciously intending or trying to do something, but also doing it in a controlled way, exerting control over the production of the bodily or mental events.
This perspective on action nicely dovetails with the data on automatic control summarized so far. In the cited examples, the subject adjusts, corrects, modifies, or simply acts in a given way while exerting an automatic form of control over performance. Even though the subject controls her behavior without a need for reflection, second-order thoughts and/or metarepresentational awareness, her action is sensitive to some normative constraints. Therefore, given the norm-sensitivity of these behaviours, the agent can be considered to be exerting agentive automatic control while performing them.
So, the next question that we should confront is whether there is a similar form of automatic control over mental actions, particularly over remembering.
3.2. Mental action and metacognition
Although the motor control theory works well in the case of bodily actions, we should resist the temptation of claiming that the control of mental action is carried out by the motor system, as some philosophers have suggested (Campbell, 1999; Peacocke, 2007). It is unlikely that you have to rely on motor images of your own bodily dispositions, as the motor system does, in order to control your mental performance (Carruthers, 2009a; Proust, 2009). No bodily movement can fulfil a mental action’s satisfaction conditions. For example, your mental attempt to retrieve a telephone number does not correspond to any bodily movement or to any physical event, because you can mentally retrieve the telephone number without moving any muscle or changing anything in the world. Therefore, the direction of fit of remembering is not world-to-mind, as in the case of bodily intentions (Searle, 1983), since it need not involve a change in the world (outside your mind) to be satisfied. This is basically explained by the nature of mental actions in contrast with bodily actions: the former aim at producing an epistemic, emotional, attentional, or motivational change in the mind, whereas the latter aim at producing a change in the body and/or in the world (Kirsh & Maglio, 1994).
Thus, a first distinction between bodily action and mental action refers to their goal and the kind of control the agent must exert to perform it. Whereas in bodily action we aim at changing the world and use mainly motor control, in mental action we aim at changing the mind and use metacognition. Metacognition is the capacity to monitor and control mental processes and dispositions (Proust, 2013).
3.3. Two levels or types of metacognition
The classic view is that metacognition is thinking about thinking, i.e. forming metarepresentations, or second-order thoughts, about first-order mental states (Flavell, 1979; Nelson & Narens, 1990). Accordingly, metacognition turns the mindreading capacity towards oneself, and deploys mental concepts to produce self-ascriptions (Carruthers, 2009b, 2011). This type of metacognition has been called “theory-based metacognition” (Koriat, 2000), “high-level metacognition” (Arango-Muñoz, 2011), or “system 2 metacognition” (Proust, 2013; Shea et al., 2014) in analogy with system-2 control processes.
In contrast, recent studies have proposed that there is a leaner form of metacognition that doesn’t require consciousness, theory of mind, or mental concepts, and operates implicitly: “Much cognitive control takes place outside system 2” (Shea et al., 2014, p. 188). Metacognition includes the mental capacity to monitor and control mental processes implicitly (Shea et al., 2014), or by means of metacognitive feelings (Arango-Muñoz, 2011, 2014; Proust, 2013).[8] This type of metacognition has been called “experience-based metacognition” (Koriat, 2000) or “system-1 metacognition” (Proust, 2013; Shea et al., 2014). [9]
“System-2 metacognition” is widely accepted, so we will only briefly discuss evidence for “system-1 metacognition”, which comes mainly from three domains: 1) comparative psychology has shown that animals devoid of mindreading capacity and mental concepts pass metacognitive tasks: Rhesus monkeys and bottlenose dolphins are able to monitor and control their perceptual and memory capacities (Hampton, 2001; Smith, 2009; Smith, Beran, Redford, & Washburn, 2006; Smith & Washburn, 2003). 2) Psychologists have found that subjects very often monitor their mental processes based on metacognitive feelings and heuristics, and not necessarily on theoretical information (Koriat, 2000). A growing literature shows the important role of metacognitive feelings in the control of mental processes (Fernández Cruz et al., 2016; Koriat, 2000; Schwartz & Metcalfe, 2010). Finally, 3) there seem to be differences in the neural activity related to mindreading (system-2 metacognition) and system-1 metacognition (Proust, 2012; Schnyer et al., 2004). According to some studies, mindreading is correlated with neural activity in the right temporal-parietal junction, the prefrontal antero-medial cortex, and anterior temporal cortex (Del Cul, Dehaene, Reyes, Bravo, & Slachevsky, 2009; Perner & Aichorn, 2008); whereas system-1 metacognition is correlated with neural activity in the dorsolateral prefrontal cortex, the ventro-medial prefrontal cortex, and the anterior cingulate cortex (Schnyer et al., 2004).
As argued before, when we perform mental actions we exert agentive control through metacognition. So, if remembering is an action, we should exert control over our episodic reconstruction processes by means of one of these two kinds of metacognition. But which one? In what follows we argue that system-1, feeling-based metacognition allows us to exert agentive control over the content-production aspect of episodic memory processes.
4. Feeling-based metacognition and episodic memory
4.1. Episodic memory as the imaginative reconstruction of the personal past
The reconstructive theory of memory claims that memory is not the passive retrieval of stored representations, but rather the active reconstruction of a representation of a past episode: “Remembering is a matter of imagining or simulating the past” (Michaelian, 2016, p. 60). However, it is also worth noting that reconstruction takes place not only during retrieval, but also during the encoding stage of the process, in which memory selects what to retain and how to retain it, often encoding only a gist of the remembered event (Schacter & Addis, 2007).
If remembering is the action of imaginative reconstruction of a past experience, this opens the door for many mnemonic errors such as false recognition (Roediger & McDermott, 1995), boundary extension (Intraub, Bender, & Mangels, 1992), the superportrait phenomenon (Rhodes, 1996), and confabulation (Michaelian, 2011). Misremembering occurs when a reliable mnemonic system produces a false or inaccurate representation of the past. Reconstruction theorists highlight that, given the constructive character of memory, misremembering is a systematic and ordinary occurrence in our daily lives (De Brigard, 2014).
Given the imperfect character of memory outputs, some kind of control is needed to ensure the reliability of memory. This is not the preparatory, stage-setting control of mental action discussed above (trying to bring it about that I remember X); rather, what is required here is control over the automatic imaginative content-production processes and its reliability (trying to remember correctly). Metacognition, the control of mental processes and dispositions, achieves this largely by means of metacognitive feelings.
4.2. How metacognitive feelings guide episodic reconstruction
As we said earlier, memory has a rich phenomenology: we experience not only that we can bring the relevant mnemonic content to mind; we also feel that remembering is sometimes easier, sometimes harder; we feel that a given memory is more or less accurate; we feel that we will be able to remember something if we try harder (e.g., in the tip-of-the-tongue phenomenon), or we simply know that we will not be able to remember, no matter how hard we try; sometimes we feel that we are forgetting something we should remember; and so on. All these phenomena are metacognitive experiences that convey some information about how the episodic reconstruction process is going: whether it is running smoothly or has encountered obstacles, and how serious these obstacles are. As mentioned above, these phenomena suggest that there is more to episodic memory control than simply holding an intention in mind and waiting for the right content to come. In this respect, Souchay et al. remark that “presumably on-line feelings and thoughts generated during retrieval by the object level are monitored by the meta-level, leading to the implementation of mnemonic strategies, termination of search, and so on” (2013, p. 1). These phenomena suggest that we not only have control over the global act of remembering, but also have some level of control over the reliability of the mental content produced.
The feeling of knowing (FOK) is one of the most interesting and puzzling experiences related to memory. Sometimes, when you are faced with a memory question, even before trying to retrieve the answer, you already feel that you know it, that you will be able to reconstruct the target information. In the domain of semantic memory, it has been shown that FOKs play a central role in deciding whether to attempt to retrieve a piece of information from memory or to use another strategy to obtain it (Arango-Muñoz, 2013; Paynter, Reder, & Kieffaber, 2009; Reder, 1987). Although FOK has been less studied in the domain of episodic memory,[10] it seems likely that this experience provides subjects with a sense of whether or not they will be able to reconstruct a piece of information. When a subject is asked whether she will be able to remember her school graduation, a feeling of knowing will motivate a positive answer and a reconstruction attempt.
Subjects also feel that remembering is sometimes easier or harder. Ease of retrieval has been shown to play a fundamental role in the monitoring and control of memory retrieval. According to Koriat’s accessibility model of FOK (1993, 2000), the accessibility of partial or contextual information relevant to the memory target gives rise to this feeling. For example, immediate or quick access to the first lines of a song or anthem gives rise to the feeling of knowing it. This feeling may motivate the subject to try to reconstruct the target information. Conversely, a person may have a feeling of uncertainty, of not knowing or of forgetting, which may motivate her to give up on the mnemonic reconstruction rapidly.
We also feel that a given memory is more or less accurate. Some remembered information feels right, and some feels wrong. It has been found that the fluency with which a memory is reconstructed is a key determinant of the feeling of rightness. “Fluency” refers to the ease with which a piece of information is reconstructed, and has thus mainly been measured through reaction time, i.e. the speed of the mnemonic reconstruction. A memory that is reconstructed fluently (i.e. quickly) feels right, whereas a reconstruction that is not fluent feels dubious (Benjamin, Bjork, & Schwartz, 1998; Kelley & Lindsay, 1993; Whittlesea & Leboe, 2003). When episodic retrieval is accompanied by a feeling of wrongness or doubt, this tends to motivate a revision (Gallo & Lampinen, 2015).
The feeling of forgetting also plays an important role in remembering. When we try to mentally reconstruct lists of items, scenes, or to-do lists, we often have a feeling that we have forgotten something (Halamish, McGillivray, & Castel, 2011). This feeling motivates the revision of the mnemonic reconstruction to check if something is missing, or casts doubt on the memory’s integrity.
All these metacognitive feelings are of vital importance in the production and evaluation of the mnemonic reconstruction and the further decision of whether to endorse the reconstructed information (Michaelian, 2012). Our claim is that when a subject is remembering guided by these metacognitive feelings, she is actually exerting agentive automatic control over her mental mnemonic processes, since said feelings tend to resolve felt tensions and guide mnemonic reconstruction toward reliability. Thus, given the centrality of agentive control for action, when the mental processes of imaginative reconstruction that constitute remembering are monitored and controlled using metacognitive feelings, remembering counts as a mental action.
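To make the proposal concrete, here is a deliberately simplified sketch of the kind of feeling-based control loop we have in mind. The accessibility heuristic, the fluency threshold, and the three possible outcomes (endorse, revise, give up) are our own illustrative assumptions, not a model drawn from the metacognition literature.

```python
from dataclasses import dataclass

@dataclass
class Reconstruction:
    content: str
    retrieval_time: float   # seconds; a stand-in for (lack of) fluency
    partial_cues: int       # accessible fragments feeding the feeling of knowing

def feeling_of_knowing(cues: int) -> bool:
    """Accessibility-style heuristic: enough partial access yields 'I know this'."""
    return cues >= 2

def feeling_of_rightness(retrieval_time: float) -> float:
    """Fluent (fast) reconstruction feels right; slow reconstruction feels dubious."""
    return max(0.0, 1.0 - retrieval_time / 5.0)

def control_loop(attempt: Reconstruction) -> str:
    """Decide, on the basis of metacognitive feelings alone, what to do with a reconstruction."""
    if not feeling_of_knowing(attempt.partial_cues):
        return "give up the search"                    # feeling of not knowing
    if feeling_of_rightness(attempt.retrieval_time) > 0.6:
        return f"endorse: {attempt.content}"           # feels right
    return "revise the reconstruction"                 # feels dubious

if __name__ == "__main__":
    print(control_loop(Reconstruction("childhood bedroom, bed by the window", 1.2, 4)))
    print(control_loop(Reconstruction("graduation day, rainy afternoon", 4.5, 3)))
    print(control_loop(Reconstruction("", 6.0, 0)))
```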
5. Conclusion
The main reason why some philosophers consider memory to be just a passive information retention and retrieval capacity is its automaticity. The apparent “ballistic” nature of the production of mental content seems to leave no room for agency and control. In opposition to this view, we have defended an active account of memory according to which remembering is a mental action. Our claim differs from a “simple-connection” view because we hold that agents can not only control the top-down initiation of remembering, but also monitor and guide the content-production process by means of intuitive metacognitive feelings.
Memory, as an imaginative reconstruction of past experience, requires some kind of metacognitive control to ensure its reliability. Drawing from recent work on the normativity of automaticity and automatic control, we distinguish two kinds of metacognitive control: top-down, reflective control, and automatic, intuitive, feeling-based control. Thus, we propose that whenever agents control their mental remembering processes by means of intuitive or feeling-based metacognition, this is an action.
This does not entail that metacognitive feelings are perfect guides to reliable memories. We often misattribute reliability levels to our memories, and some people are surely better than others at producing reliable memories; but to the extent that we can intentionally improve the reliability of our recollections, it is largely thanks to our metacognitive intuitions and feelings.
References
Arango-Muñoz, S. (2011). Two levels of metacognition. Philosophia, 39(1), 71–82. https://doi.org/10.1007/s11406-010-9279-0
Arango-Muñoz, S. (2013). Scaffolded Memory and Metacognitive Feelings. Review of Philosophy and Psychology, 4(1), 135–152. https://doi.org/10.1007/s13164-012-0124-1
Arango-Muñoz, S. (2014). The nature of epistemic feelings. Philosophical Psychology, 27(2), 1–19. https://doi.org/10.1080/09515089.2012.732002
Baddeley, A. (2007). Working memory, thought, and action. Oxford: Oxford University Press.
Bargh, J. A. (1992). The ecology of automaticity: Toward establishing the conditions needed to produce automatic processing effects. The American Journal of Psychology, 181–199.
Bargh, J. A. (1997). The automaticity of everyday life. In R. S. Wyer (Ed.), Advances in Social Cognition (pp. 1–62). Lawrence Erlbaum Associates.
Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of Personality and Social Psychology, 71(2), 230–244.
Benjamin, A. S., Bjork, R. A., & Schwartz, B. (1998). The mismeasure of memory: When retrieval fluency is misleading as a metamnemonic index. Journal of Experimental Psychology: General, 127, 55–68.
Besner, D., & Stolz, J. A. (1999). What kind of attention modulates the Stroop effect? Psychonomic Bulletin & Review, 6(1), 99–104.
Brownstein, M., & Madva, A. (2012). The normativity of automaticity. Mind and Language, 27(4), 410–434.
Campbell, J. (1999). Schizophrenia, the space of reasons and thinking as a motor process. The Monist, 82(4), 609–25.
Carruthers, P. (2009a). Action-awareness and the active mind. Philosophical Papers, 38(2), 133–156. https://doi.org/10.1080/05568640903146443
Carruthers, P. (2009b). How we know our own minds: The relationship between mindreading and metacognition. Behavioral and Brain Sciences, 32(2), 121. https://doi.org/10.1017/S0140525X09000545
Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. Oxford: Oxford University Press.
Clark, A. (2015). What the “extended me” knows. Synthese. https://doi.org/10.1007/s11229-015-0719-z
De Brigard, F. (2014). Is memory for remembering? Recollection as a form of episodic hypothetical thinking. Synthese, 191(2), 155–185. https://doi.org/10.1007/s11229-013-0247-7
Del Cul, A., Dehaene, S., Reyes, P., Bravo, E., & Slachevsky, A. (2009). Causal role of prefrontal cortex in the threshold for access to consciousness. Brain, 132(9), 2531–2540.
Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278.
Evans, J. S. B. T. (2010a). Intuition and reasoning: A dual-process perspective. Psychological Inquiry, 21(4), 313–326.
Evans, J. S. B. T. (2010b). Thinking Twice: Two minds in One Brain. Oxford: Oxford University Press.
Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223–241.
Fernández Cruz, A. L., Arango-Muñoz, S., & Volz, K. (2016). Oops, scratch that! Monitoring one’s own errors during mental calculation. Cognition, 146, 110–120. https://doi.org/10.1016/j.cognition.2015.09.005
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906–911. https://doi.org/10.1037//0003-066x.34.10.906
Francolini, C. M., & Egeth, H. (1980). On the nonautomaticity of `automatic’ activation: Evidence of selective seeing. Perception and Psychophysics, 27, 331–342.
Gallo, D. A., & Lampinen, J. M. (2015). Three pillars of false memory prevention: Orientation, evaluation and corroboration. In J. Dunlosky & S. K. Tauber (Eds.), The Oxford Handbook of Metamemory. https://doi.org/10.1093/oxfordhb/9780199336746.013.11
Gendler, T. S. (2008a). Alief and belief. Journal of Philosophy, 105(10), 634–663.
Gendler, T. S. (2008b). Alief in action (and reaction). Mind and Language, 25(5), 552–585.
Gordon, A. M., & Soechting, J. F. (1995). Use of tactile afferent information in sequential finger movements. Experimental Brain Research, 107(2), 281–292.
Halamish, V., McGillivray, S., & Castel, A. D. (2011). Monitoring One’s Own Forgetting in Younger and Older Adults. Psychology and Aging. https://doi.org/10.1037/a0022852
Hampton, R. R. (2001). Rhesus monkeys know when they remember. Proceedings of the National Academy of Sciences U.S.A, 98, 5359–5362.
Intraub, H., Bender, R. S., & Mangels, J. A. (1992). Looking at pictures but remembering scenes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 180–191.
Jeannerod, M. (2006). Motor cognition. Oxford: Oxford University Press.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kahneman, D., & Henik, A. (1981). Perceptual organization and attention. In M. Kubovy & J. Pomerantz (Eds.), Perceptual Organization and Attention. Erlbaum.
Kelley, C. M., & Lindsay, S. D. (1993). Remembering mistaken for knowing: Ease of retrieval as a basis for confidence in answers to general knowledge. Journal of Memory and Language, 32, 1–24.
Kirsh, D., & Maglio, P. (1994). On distinguishing epistemic from pragmatic action. Cognitive Science, 18, 513–549.
Koch, C., & Crick, F. (2001). On the zombie within. Nature, 411(6840), 893.
Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological Review, 100(4), 609–639.
Koriat, A. (2000). The feeling of knowing: Some metatheoretical implications for consciousness and control. Consciousness and Cognition, 9(2), 149–171.
LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6(2), 293–323.
Logan, G. D., & Cowan, W. B. (1984). On the ability to inhibit thought and action: A theory of an act of control. Psychological Review, 91(3), 295–327.
Logan, G. D., & Crump, M. J. C. (2010). Cognitive Illusions of Authorship Reveal Hierarchical Error Detection in Skilled Typists. Science, 330, 683–686. https://doi.org/10.1126/science.1190483
MacLeod, C. M. (1991). Half a century of research on the Stroop effect: an integrative review. Psychological Bulletin, 109(2), 163–203.
Mele, A. R. (1997). Agency and mental action. Nous, 31, 231–249.
Mele, A. R. (2009). Mental action: A case study. In L. O’Brien & M. Soteriou (Eds.), Mental Actions (pp. 17–37). Oxford: Oxford University Press.
Michaelian, K. (2011). Generative memory. Philosophical Psychology, 24(3), 323–342.
Michaelian, K. (2012). Metacognition and endorsement. Mind and Language, 27(3), 284–307.
Michaelian, K. (2016). Mental Time Travel: Episodic Memory and our Knowledge of the Personal Past. Cambridge, Massachusetts: The MIT Press.
Moors, A., & De Houwer, J. (2006). Automaticity: A theoretical and conceptual analysis. Psychological Bulletin, 132(2), 297–326.
Mossel, B. (2005). Action, control and sensation of acting. Philosophical Studies, 124, 129–180.
Nagel, J. (2014). Intuition, reflection, and the command of knowledge. Proceedings of the Aristotelian Society, 88, 217–239.
Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. The Psychology of Learning and Motivation, 26, 125–173.
Norman, D. A., & Shallice, T. (1986). Attention to action: Willed and automatic control of behavior. In R. J. Davidson, G. E. Schwartz, & D. Shapiro (Eds.), Consciousness and Self-Regulation (pp. 1–18). Springer.
Pacherie, E. (2008). The phenomenology of action: A conceptual framework. Cognition, 107, 179–217. https://doi.org/10.1016/j.cognition.2007.09.003
Paynter, C. A., Reder, L. M., & Kieffaber, P. D. (2009). Knowing we know before we know: ERP correlates of initial feeling-of-knowing. Neuropsychologia, 47(3), 796–803. https://doi.org/10.1016/j.neuropsychologia.2008.12.009
Peacocke, C. (2007). Mental action and self-awareness (I). In B. McLaughlin & J. Cohen (Eds.), Contemporary Debates in the Philosophy of Mind. Blackwell.
Perner, J., & Aichhorn, M. (2008). Theory of mind, language and the temporo-parietal junction mystery. Trends in Cognitive Sciences, 12(4), 123–126.
Posner, M. I., & Snyder, C. R. R. (1975). Attention and cognitive control. In R. Solso (Ed.), Information Processing and Cognition: The Loyola Symposium. Erlbaum.
Proust, J. (2005). La nature de la volonté. Paris: Folio-Gallimard.
Proust, J. (2009). Is there a sense of agency of thought? In L. O’Brien & M. Soteriou (Eds.), Mental actions and agency (pp. 253–279). Oxford: Oxford University Press.
Proust, J. (2012). Metacognition and mindreading: one or two functions? In M. Beran, J. Brandl, J. Perner, & J. Proust (Eds.), The foundations of metacognition (pp. 234–250). Oxford: Oxford University Press.
Proust, J. (2013). The Philosophy of Metacognition: Mental Agency and Self-Awareness. Oxford: Oxford University Press.
Rabbitt, P. (1966a). Error correction time without external error signals. Nature, 212, 438.
Rabbitt, P. (1966b). Errors and error correction in choice-response tasks. Journal of Experimental Psychology, 71(2), 264.
Rabbitt, P. (1990). Age, IQ and awareness of errors. Ergonomics, 33(10), 1291–1305.
Rabbitt, P. (2002). Consciousness is slower than you think. The Quarterly Journal of Experimental Psychology, 55(4), 1081–1092.
Reder, L. M. (1987). Strategy selection in question answering. Cognitive Psychology, 19(1), 90–138. https://doi.org/10.1016/0010-0285(87)90005-3
Rhodes, G. (1996). Superportraits: Caricatures and recognition. Hove: Psychology Press.
Rietveld, E. (2008). Situated normativity: The normative aspect of embodied cognition in unreflective action. Mind, 117(468), 973–1001.
Roediger, H. L., & McDermott, K. B. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 803–814.
Schacter, D. L., & Addis, D. R. (2007). The cognitive neuroscience of constructive memory: Remembering the past and imagining the future. Philosophical Transactions of the Royal Society B, 362, 773–786. https://doi.org/10.1098/rstb.2007.2087
Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84(1), 1–66.
Schnyer, D. M., Verfaellie, M., Alexander, M. P., LaFleche, G., Nichols, L., & Kaszniak, A. W. (2004). A role for right medial prefrontal cortex in accurate feeling-of-knowing judgments: evidence from patients with lesions to frontal cortex. Neuropsychologia, 42, 957–966.
Schwartz, B., & Metcalfe, J. (2010). Tip-of-the-tongue (TOT) states: Retrieval, behavior, and experience. Memory and Cognition, 39, 737–749.
Searle, J. (1983). Intentionality. Cambridge: Cambridge University Press.
Shea, N., Boldt, A., Bang, D., Yeung, N., Heyes, C., & Frith, C. D. (2014). Supra-personal cognitive control and metacognition. Trends in Cognitive Sciences, 18(4), 186–193. https://doi.org/10.1016/j.tics.2014.01.006
Shepherd, J. (2014). The contours of control. Philosophical Studies, 170(3), 395–411. https://doi.org/10.1007/s11098-013-0236-1
Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84(2), 127–194.
Smith, J. D. (2009). The study of animal metacognition. Trends in Cognitive Sciences, 13(9), 389–396. https://doi.org/10.1016/j.tics.2009.06.009
Smith, J. D., Beran, M. J., Redford, J. S., & Washburn, D. A. (2006). Dissociating uncertainty responses and reinforcement signals in the comparative study of uncertainty monitoring. Journal of Experimental Psychology: General, 135, 282–297.
Smith, J. D., & Washburn, D. A. (2003). The comparative psychology of uncertainty monitoring and metacognition. Behavioral and Brain Sciences, 26, 317–373.
Souchay, C., Guillery-Girard, B., Pauly-Takacs, K., Wojcik, D. Z., & Eustache, F. (2013). Subjective experience of episodic memory and metacognition: a neurodevelopmental approach. Frontiers in Behavioral Neuroscience, 7, 1–16. https://doi.org/10.3389/fnbeh.2013.00212
Stanovich, K. E. (2004). The Robot’s Rebellion: Finding Meaning in the Age of Darwin. Chicago: University of Chicago Press.
Strawson, G. (2003). Mental Ballistics or the Involuntariness of Spontaneity. Proceedings of the Aristotelian Society, New Series, 103, 227–256.
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18(6), 643–662. https://doi.org/10.1037/h0054651
Suddendorf, T., & Corballis, M. C. (2007). The evolution of foresight: what is mental time travel, and is it unique to humans? Behavioral and Brain Sciences, 30, 299–313. https://doi.org/10.1017/s0140525x07001975
Walsh, M. M., & Anderson, J. R. (2009). The strategic nature of changing your mind. Cognitive Psychology, 58, 416–440.
Whittlesea, B. W. A., & Leboe, J. P. (2003). Two fluency heuristics (and how to tell them apart). Journal of Memory and Language, 49, 62–79.
Wolpert, D. M., & Ghahramani, Z. (2000). Computational principles of movement neuroscience. Nature Neuroscience, 3, 1212–1217.
Wolpert, D. M., Ghahramani, Z., & Jordan, M. I. (1995). An internal model for sensorimotor integration. Science, 269, 1880–1882.
Wu, W. (2013). Mental Action and the Threat of Automaticity. In A. Clark, J. Kiverstein, & T. Vierkant (Eds.), Decomposing the Will (pp. 244–261). Oxford: Oxford University Press.
Wu, W. (2016). Experts and Deviants: The Story of Agentive Control. Philosophy and Phenomenological Research, 93(1), 101–126. https://doi.org/10.1111/phpr.12170
Footnotes
[1] This distinction between “trying to X” and “trying to bring it about that X”, plus the claim that one can do only the latter with respect to memory, led Mele to revise his earlier view (Mele, 1997) and conclude that remembering is never a mental action.
[2] The picture has become much more complicated than a duality of opposites. However, the original description of the concepts as opposites (Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977) still bears influence on contemporary research, and is therefore worth taking as a starting point for conceptual analysis.
[3] Schneider and Shiffrin defined an automatic process as an activation of a series of neural nodes in which “[t]he sequence of nodes (nearly) always becomes active in response to a particular input configuration, where the inputs may be externally or internally generated and include the general situational context” (Schneider & Shiffrin, 1977, p. 2). The view has remained influential: “The essence of TASS [=intuitive] subprocesses is that they trigger whenever their appropriate stimuli are detected, that they cannot be selectively ‘turned off’” (Stanovich, 2004, p. 52). In a review of literature on automaticity in social behaviour, Bargh et al. (1996, p. 252) find that “[r]ecent research has shown that attitudes and other affective reactions can be triggered automatically by the mere presence of relevant objects and events, so that evaluation and emotion join perception in the realm of direct, unmediated psychological effects of the environment.”—a view that Gendler (2008a, p. 644) quotes approvingly.
[4] For the context-dependence of Stroop effects, see Francolini & Egeth (1980), Kahneman & Henik (1981), and Besner & Stolz (1999). These studies contributed to the rejection of the traditional view that automaticity is attention-independent (LaBerge & Samuels, 1974; Posner & Snyder, 1975; Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977).
[5] For Shiffrin & Schneider, “[s]ome automatic processes may be initiated under subject control, but once initiated all automatic processes run to completion automatically” (Shiffrin & Schneider, 1977, p. 160). Norman and Shallice (1986) consider that, since automatic processes run to completion once triggered, any kind of error-correction requires deliberate attentional control. Bargh refers to this as automaticity’s “autonomy” (1992, p. 186), and suggests that it is its one core feature. For Stanovich, “TASS [i.e. intuitive] processes cannot be turned off or interfered with by central systems. Their operation is obligatory when triggered by the relevant stimuli […]. TASS processes tend to be ballistic—they run to completion once triggered and cannot be aborted in midcourse” (Stanovich, 2004, p. 39). (See also Logan & Cowan, 1984; Moors & De Houwer, 2006, pp. 301–302).
[6] For other cases of automatic error detection and correction in sensorimotor processes, see Gordon & Soechting (1995), Rabbitt (1990), and Logan & Crump (2010).
[7] Notice that this dual-process approach is significantly different from coarser dual-system approaches. (On the general outlook, and the difference between dual-system and dual-process accounts, see Evans (2008, 2010b) and Evans & Stanovich (2013).)
[8] The implicit and the feeling-based views of low-level metacognition are different. The former holds that there can be control in the total absence of consciousness, whereas the latter proposes that feelings are the outputs of metacognition and influence control processes. For reasons of space, we will henceforth endorse the feeling-based view without arguing for it.
[9] The literature employs a distinction between ‘system 1’ and ‘system 2’ metacognition, so we use those terms here, although generally we favor a distinction in terms of process types, consistently with our preference for dual-process over dual-system accounts of cognition (Evans & Stanovich 2013).
[10] Some researchers consider that studying and recalling word lists or sentences is a way to study episodic memory, and not only semantic memory. Although we agree that studying and recalling word lists may indirectly inform our knowledge of episodic memory, it is not the most straightforward way to study this type of memory.
[/expand]
[expand title=”Invited Comments from Felipe De Brigard (Duke)” elwraptag=”p”]
Comments on “Remembering as a mental action”
Felipe De Brigard
Duke University
There is much to like about this paper. My comments aim at spurring discussion and request some clarification.
1. First, a general question about the possibility of truly automatic and ballistic cognitive processes. In their paper, Arango and Bermudez (A&B henceforth) define—or, rather, side with the traditional definition of—automatic processes as “those processes that are invariably triggered by the same kind of stimuli and respond to them in systematically the same way” (p. 4). And then, in section 2.3, they introduce the possibility of having controlled automatic processes, which are to be separated from the truly ballistic processes, which cannot be controlled. Memory construction, they go on to say, can be controlled and automatic, and thus can be considered an action—which wouldn’t have been the case if such a process were automatic but uncontrollable, i.e., ballistic. Now, this automatic control appears to be, if I understand correctly, sub-threshold error correction: very rapid computations that help to filter out noise and stay on task at a speed and/or level of processing that makes it, if not impossible, at least counterintuitive to say that one exerts direct control upon them. But, if so, then I wonder whether there are ever any truly uncontrolled automatic processes going on in the brain or, more generally, in our nervous system. After all, there is error correction at pretty much every level of description in cognitive neuroscience: one can think of astrocytes as helping to correct for “errors” in extra-cellular space so that ionic balance is restored, thereby allowing synapses to occur. One can think of pyramidal neurons in the visual cortex as correcting or “minimizing” error from visual stimuli as a matter of course; there is even evidence for error minimization at the level of neuronal populations, in a correcting process that occurs on the order of milliseconds (Egner et al., 2010, J Neuro). It seems to me, then, that we can describe many of the computations that go all the way to the most peripheral of our afferent terminals in terms of automatic, yet controlled, processes. After all, our brains have evolved to account for the variability of internal and external context, and they did so by generating successful strategies for error correction or minimization. But if there is error correction everywhere, would anything qualify as ballistic anymore? Just wondering.
2. A quick one: It occurred to me that there is an interesting asymmetry between the ballistic case of perception and that of imagination. Consider the Müller-Lyer illusion. You have seen it many times; you’ve studied it and know that the lines are the same length. Now suppose someone asks you not to see them as unequal. You can’t. Just like the person who sees them for the very first time and does not know that they are the same length, you can’t help but see them as unequal. Knowing the actual lengths of the lines does nothing to remove your incapacity to prevent yourself from seeing them as unequal. But the same is not the case when you try not to imagine a polar bear, for the incapacity to not see the polar bear in your mind’s eye depends on your knowledge of polar bears. If you have never seen a polar bear, don’t know what they look like, etc., asking you not to imagine a polar bear shouldn’t be that hard. Maybe this is just an argument for imaginative knowledge depending on perceptual knowledge; or maybe just a curious thought.
3. A minor comment about the first reason against the simple-connection strategy for answering the problem of mental ballistics (sec. 2.2). According to the “simple-connection reply”, a “certain mental process counts as an action if the produced mental content corresponds to the intention’s representational content, since this implies that the agent has selected the right content, i.e. has directed her attention to the proper content.” But, A&B argue, “there are many cases of controlled remembering where there’s no apparent intention beforehand”, and they give the following example: “I might have the intention to go back home to retrieve my credit card. But then, as I am getting ready to go, I suddenly remember that the card is maxed out; so I revise my intention to go back home. In this case, the mnemonic process was not structured top-down by my intention: the memory’s content is not a proper solution to a selection problem established by my intention.” And the reason, I take it, is that the mental content of the intention does not correspond to the retrieved mental content.
I find this case odd, for there seems to be an equivocation between the content of the intention to go find the credit card and the content of the intention to remember the credit card. If my intention is to go find my credit card, then the relevant content that needs to be assessed as appropriate or not is whatever content satisfies the intention to go find my credit card. But this is different from my intention to remember my credit card, which needs to be assessed by whether or not I successfully remembered my credit card. My sense is that my intention to go find my credit card is a complex mental process that may have, as part of its content, a mental representation of my credit card. In that sense, it is likely that in order to successfully find my credit card at home I need first to remember my credit card—remember what it looks like, where I may have left it, etc. But remembering the credit card is not sufficient to satisfy the content of my intention to go find it. Thus, when I remember my credit card and, incidentally, remember that it is maxed out, I am in fact satisfying my intention to remember my credit card, which is a (perhaps necessary) component of my intention to go find my credit card, but it is neither sufficient for nor equivalent to such an intention. And, therefore, it is not the appropriate mental content against which to assess the satisfaction of the intention to find the credit card.
By the way, how are we measuring “correspondence” between the intended and the produced mental contents? Because it seems to me that there are lots of things one intends to do, and has, as it were, an idea in mind, and the result ends up not coinciding exactly with the intended idea. Have you ever seen that TV show on HGTV—“Property Brothers” I think it is—where one of the architects plans a renovation on a computer program, then brings it about, and then, to the astonishment of the owners of the property—and the viewers!—superimposes the digital rendition of, say, the kitchen against the image of the renovated kitchen? The latter matches the former, but not exactly. Nevertheless, despite this mismatch, one wouldn’t say that the resultant renovation wasn’t intended by the architect (and TV personality) that designed it, right?
4. One final and brief comment, related to the first one. According to A&B, metacognition enables the controlled guidance of the automatic processes responsible for memory’s reconstruction. But if there is controlled correction at even earlier stages of the retrieval process, why not count those processes as also guiding the automatic reconstruction of episodic memory? Why privilege metacognition?
[/expand]
[expand title=”Invited Comments from Kourken Michaelian (Otago)” elwraptag=”p”]
Commentary on Santiago Arango-Muñoz and Juan Pablo Bermúdez, Remembering as a mental action
Kourken Michaelian
University of Otago
The question with which Arango-Muñoz and Bermúdez are concerned is whether remembering qualifies as a mental action. While some of my own work serves as one source of ammunition for their defence of a positive answer to the question, it does not (as far as I can tell) straightforwardly commit me to endorsing that answer. Nevertheless, the claim that remembering is a mental action strikes me as plausible, and I want in this commentary to suggest two ways in which their already persuasive argument for it might be further strengthened.
As Arango-Muñoz and Bermúdez see things, the question whether remembering is a mental action is bound to arise once we accept the constructive view of memory which has long been dominant in psychology and which appears to be on its way to dominance in philosophy as well. It arises with particular force if we accept the simulation theory of memory, which characterizes episodic memory as a form of episodic hypothetical thought (De Brigard 2014) or episodic imagination (Michaelian 2016). The reasoning is the following. Imagining is a “ballistic” mental process in Stanovich’s (2004) sense: it “run[s] to completion once triggered and cannot be aborted in midcourse”; that is, it is, once triggered, not subject to control by the agent. But mental action, like action in general, implies agential control over the process. So imagining is not a mental action. And if memory is a form of imagination, this straightforwardly implies that remembering is not a mental action.
This, to be clear, is the reasoning of one who denies that remembering is a mental action. Arango-Muñoz and Bermúdez themselves seek to undermine it by pointing to the role of metacognition in remembering. They grant that there is an important sense in which the process of remembering is automatic but argue that there is nevertheless also an important sense in which it is controlled. The fact that memory is a form of imagination, they argue, implies that there is always some probability that any given output of remembering is inaccurate. There is thus a need for metacognitive monitoring and control of remembering. Distinguishing between (what we can refer to as) type 2 and type 1 metacognition, they argue that type 1 metacognition, in the form of metacognitive feelings, plays a role in controlling the process of remembering and ensuring the accuracy of its outputs. Remembering is thus not ballistic and may qualify as a mental action.
There is an interesting question here about whether this argument extends to other forms of imagination, but I will not take this up. Instead, I want to briefly point to two ways in which Arango-Muñoz and Bermúdez’s argument for the claim that remembering is a form of mental action might be strengthened.
First, the motivation for the focus on type 1 metacognition to the exclusion of type 2 metacognition is unclear. The type 1/type 2 distinction is notoriously slippery, but there is clearly a distinction to be drawn between feeling-based forms of metacognition (type 1) and reasoning-based forms of metacognition (type 2). And both forms of metacognition are clearly involved in memory. Arango-Muñoz and Bermúdez point to a number of epistemic feelings that play a role in remembering. But forms of metacognition such as source monitoring at least sometimes involve explicit reasoning from features of retrieved representations to conclusions about their probable origins in different sources (e.g., first-hand experience vs. second-hand testimony) and hence decisions about whether to accept or reject them. If the involvement of type 1 metacognition in memory implies that remembering is not a ballistic process, presumably the involvement of type 2 metacognition would do so as well. Moreover, whereas, in type 1 metacognition, only the outputs of the monitoring process—metacognitive feelings—are available at the level of the agent, in type 2 metacognition, the monitoring process itself can be seen as unfolding at the level of the agent; the involvement of type 2 metacognition thus potentially provides a more straightforward means of making a case for the claim that remembering is a mental action.
Second, I want to suggest that Arango-Muñoz and Bermúdez’s argument may overlook certain forms of control that are exerted by agents over the process of remembering. In principle, the agent might exercise control over the process at three points: he might launch it, he might terminate it, or he might direct it as it unfolds. Arango-Muñoz and Bermúdez seem to concede that launching a process does not suffice for having control over it in the sense that matters here (a ballistic process might be deliberately launched). Some of the forms of control that they have in mind amount to ways of terminating the process (see, e.g., their discussion of the feeling of error). Others may amount to ways of directing the process as it unfolds, but they seem on the whole to accept the suggestion that agents have little control over the content that is generated by the process of remembering once it has been launched. The motivation for this view, however, is unclear. Consider just two examples of control over the contents of retrieved memories. First, subjects often switch between field perspective and observer perspective when remembering (McCarroll 2017), and there would seem to be nothing to prevent them from doing so as a matter of choice. Second, episodic remembering often shades into episodic counterfactual thinking (De Brigard 2014), and there would seem to be nothing to prevent subjects from choosing those details of an event that are altered when they consider counterfactual outcomes to past events. So it would seem that we do have some degree of control over the content of retrieved memories. Again, this more robust form of control would seem to provide additional evidence for the claim that remembering is a mental action.
References
De Brigard, F. 2014. Is memory for remembering? Recollection as a form of episodic hypothetical thinking. Synthese 191(2): 155–185.
McCarroll, C. J. 2017. Looking the past in the eye: Distortion in memory and the costs and benefits of recalling from an observer perspective. Consciousness and Cognition 49: 322–332.
Michaelian, K. 2016. Mental Time Travel: Episodic Memory and Our Knowledge of the Personal Past. MIT Press.
Stanovich, K. E. 2004. The Robot’s Rebellion: Finding Meaning in the Age of Darwin. University of Chicago Press.
[/expand]
Thanks very much for this paper, Santiago and Juan Pablo, which I enjoyed reading (also, very nice presentation). I hope you won’t mind if I expand a bit on my view that you discuss and set aside in section 2.2 (thanks for engaging that work, by the way). I think the view you reject, suitably elaborated, can be deployed to helpfully articulate your automatic metacognitive control model.
We agree that the notion of automaticity is essential to understanding action. I would emphasize that it needs to be understood technically. First, a pitfall: trying to draw a division between automatic and controlled kinds of processes. That division has been hard to sustain, for when one posits an automatic process kind, another researcher often comes up with situations in which instances of the process seem to be controlled, namely subject to different kinds of top-down modulation. I don’t think this division of kinds of processes is ultimately sustainable, and we should drop it in science and philosophy. Automaticity and control will often be intertwined, and our definitions of those terms should reflect this.
If I may, the crucial steps left out in your discussion of my view are the definitions and in particular the relativization of automaticity and control to features or properties of a process. Once we make this conceptual clarification we can see why a simple connection holds, namely that automaticity implies the absence of control, and why a process can be both automatic and controlled.
Take a feature F of a process exemplified at a time t. The simple connection entails that if the feature is automatic at t then it cannot be controlled at t. But across all the features of the process, some will be automatic and some will be controlled. In the limiting case, all features can be automatic, in which case there is a sense in which the process token is completely automatic. Most processes will have automatic and controlled elements. All that is left is to define control and automaticity. Begin with control:
[AGENTIVE CONTROL] a mental process is agentively controlled relative to feature F at t if and only if the process’s having F at t is the result of an intention to produce a process as having F.
Given the simple connection, if F is not controlled then it is automatic. On this account, every intentional action will have automatic features as well as a smaller number of controlled features. Note also that the definition allows one to talk about degrees of control relative to the set of controlled features [cf. Kourken’s second “strengthening” point and whether control of just initiation would suffice for agentive control… sure, but automaticity dominates more than if you controlled termination too]. On the flipside, you can also emphasize the automatic features and, if you’d like, the degree to which the process is ballistic.
[A related side note: when Felipe asked whether there could be any purely automatic process given error-correction mechanisms, I think the definition shows that we can talk intelligibly about pure automaticity in a token process relative to a specified concept of control. In principle, a token process might never actually be influenced by error-correction mechanisms, even if that type of process is always in principle susceptible to error correction. So the process is purely automatic as instantiated, even if there isn’t any interesting sense in which it is of an automatic kind relative to error correction.]
Here’s the part that might help bring our accounts together. We can understand an account of control as specifying a type of top-down modulation relative to a pre-specified hierarchy. This means that if we are interested in control with respect to other capacities, we need only redefine control relative to the level at which those capacities reside. So:
[C-CONTROL] If a control capacity C resides at level n, and a target process at level n–1, then we can speak about feature F of the target process as controlled by C(F) if the right top-down causal process occurs.
Thus, there’s nothing special about intention in the analysis of control as such. I speak of intention because my target in the 2013 article was intentional control, but a psychologist can specify the schema for control relative to any type of top-down modulating capacity, against the background of an agreed-upon hierarchy. So we can speak about metacognitive control, as you theorize, and we can also use this invocation of metacognitive control to expand our conception of agentive control. One could do the same for beliefs, desires, hopes, emotions, and so forth. In the end, any technical discussion about control will just need to be clear about the type of control at issue, relative to the top-down modulating capacity and the feature which will count as the controlled feature for a given process affected by that capacity.
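To make the relativization concrete, here is a minimal toy sketch (the rendering and every name in it are my own invention for illustration; nothing in the definitions hangs on it). A token process carries features, and each feature counts as controlled or automatic only relative to a specified modulating capacity, so the very same token comes out controlled relative to one capacity and automatic relative to another.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Feature:
    name: str
    # The higher-level capacity (if any) whose top-down modulation produced
    # this feature, e.g. "intention" or "metacognitive feeling".
    modulated_by: Optional[str] = None

@dataclass
class TokenProcess:
    name: str
    features: List[Feature] = field(default_factory=list)

    def controlled_features(self, capacity: str) -> List[str]:
        # Controlled relative to capacity C: the feature results from C's top-down modulation.
        return [f.name for f in self.features if f.modulated_by == capacity]

    def automatic_features(self, capacity: str) -> List[str]:
        # Simple connection: a feature not controlled relative to C is automatic relative to C.
        return [f.name for f in self.features if f.modulated_by != capacity]

# One token episode of remembering; the feature labels are invented for illustration.
remembering = TokenProcess(
    name="remembering last night's dinner",
    features=[
        Feature("being a remembrance of the intended kind", modulated_by="intention"),
        Feature("the specific imagistic content that comes to mind"),
        Feature("slowing down after a felt sense of error", modulated_by="metacognitive feeling"),
    ],
)

print(remembering.controlled_features("intention"))              # controlled w.r.t. intention
print(remembering.controlled_features("metacognitive feeling"))  # controlled w.r.t. the feeling
print(remembering.automatic_features("intention"))               # automatic w.r.t. intention
```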
If we think of remembering as an extended and reconstructive process, as you suggest, then at different moments it will acquire different features that characterize how it unfolds, and we will see an interplay of automaticity and control that gives us a sense of the richness of agency in different instances of remembering. The control that identifies the process as an intentional action is that as a successful remembering it exemplifies the property of being a remembrance of the intended kind [say, my remembering what I had for dinner last night, though my remembering the specific content might be automatic, since I can’t intend to remember that content without thereby in fact fulfilling my intention]. In the course of remembering, other properties come into view because they reflect other types of top-down modulation, say if remembering requires more effort, more priming, or additional metacognitive influences as postulated in the target article. That’s an interesting move, and it expands our conception of agentive control; I’ll have to think more about the empirical evidence you present.
OK sorry that was a bit long but I wanted to emphasize that the view criticized in section 2.2 is in fact something that I think you can deploy to your advantage. So why not take on board the definitions?
Wayne
Wayne, thank you for your thoughtful and detailed comments! It’s great having your take on our work.
Your view of automaticity and control as pertaining to features of a given process (and not to process types) has several advantages, and you are right that we did not discuss it in the paper. We will have to think more about your suggestion to drop the distinction between process types and adopt a relativized account. Thanks for the suggestion!
Now, in the paper we focus on asking the question: in a process of remembering, what aspects of the process instantiate agentive control? Your comment reiterates the simple-connection view by stating that “The control that identifies the [remembering] process as an intentional action is that as a successful remembering it exemplifies the property of being a remembrance of the intended kind” [our emphasis]. You then rightly capture our paper’s suggestion: we believe there are reasons to expand our notion of agentive control beyond this. Thus, we aim to expand the proposed definition of agentive control:
“[AGENTIVE CONTROL] a mental process is agentively controlled relative to feature F at t if and only if the process’s having F at t is the result of an intention to produce a process as having F.”
by adding an account of how agents themselves manage to produce intention-responsive mental processes. So, agreed: I exercise agentive control if the mnemonic output relevantly corresponds to my intention [say, I produce an image of risotto and wine that corresponds to my intention to remember what I had for dinner yesterday]. But we can take the account even further, by explaining how we manage to agentively bring such correspondence about. In the particular case of remembering, beyond claiming that agentive control is the ability to produce an intention-responsive episodic reconstruction, we can unpack what this ability consists in, saying that agents produce the appropriate episodic reconstruction by guiding the reconstructive process through metacognitive feelings.
This discussion has helped us a lot to be more precise about how to expand the notion of agentive control, and there is plenty for us to keep working on in this direction. Thanks again for your suggestions!
REPLY TO MICHAELIAN
Thanks for your comments, Kirk!
The first question you raise in passing is whether our argument extends to other forms of imagination. It seems quite plausible to us that some form of feeling-based metacognition is also involved in some forms of imagination. Take the case of imagining objects or places. When one is trying to imagine the Flag Tower of Hanoi, there is a sense in which one is metacognitively aware of doing it correctly or incorrectly. One may feel uncertain whether the image that first comes to mind is the right one. Is that the tower in Hanoi, or is it the one in Pisa? One may feel quite sure of being able to imagine a pink elephant, but not so sure of being able to imagine all the details of the façade of Notre Dame.
You note that the paper downplays the importance of type-2 (or reasoning-based) metacognition, given the important role it plays in controlling memory, e.g. in some instances of source monitoring. We agree that type-2 metacognition “potentially provides a more straightforward means of making a case for the claim that remembering is a mental action”: when you guide your recollection process by means of reasoning, the process should plausibly count as an action. But we decided to focus on feeling-based remembering because, although less obviously agentive, this kind of remembering seems to be more widespread. Explicit reasoning about memory seems to be rare (and perhaps often initiated by a metacognitive feeling); thus, focusing on it would warrant a more straightforward defense of remembering as a mental action, but it would cover fewer cases. We think our approach, although less straightforward, has the advantage of allowing us to consider a broader set of mental processes as mental actions. We are happy to grant that both reasoning-based and feeling-based instances of controlled remembering count as mental actions. However, we stress the importance of feeling-based metacognition because it has received less attention from researchers, and yet constitutes an important source of mental agency.
The other interesting point you raise is that we may have overlooked control over episodic content production, illustrated by the subject’s capacity [1] to switch from field to observer perspective (McCarroll 2017), or [2] to alter some details when considering counterfactual outcomes of past events (De Brigard 2014). We agree that more attention is needed concerning agentive control over episodic content production. That said, regarding the two aforementioned phenomena, we wonder whether agentive control should be located in other aspects of the processes. In particular, concerning [1], we wonder whether agentive control should be located in the preparatory intention to reconstruct the scene from a given perspective, rather than in the scene reconstruction itself. Insofar as the answer leans toward the former, control would lean toward the process’s preparatory aspects rather than its production. Concerning [2], counterfactual thinking is usually taken to be a type-2 process (e.g. Stanovich (2009) holds it to require “cognitive decoupling”, a paradigmatic feature of type-2 processes), so the capacity to produce a counterfactual episodic scene may require type-2 agentive control. More importantly, it may be argued that, in “choosing those details of an event that are altered when they consider counterfactual outcomes to past events”, agents exercise most of their control in the choosing itself, which would count as a preparatory step for content production. These are merely initial reactions to intriguing questions, which would certainly benefit from further empirical study.
Works Cited
Stanovich, K. E. (2009). Distinguishing the reflective, algorithmic, and autonomous minds: Is it time for a tri-process theory? In J. St. B. T. Evans & K. Frankish (Eds.), In Two Minds: Dual Processes and Beyond (pp. 55–88). Oxford: Oxford University Press.
Thanks for your comments, Felipe!
Your first question can be summarized in one sentence: “But if there is error correction everywhere, would anything qualify as ballistic anymore?” The motivation for this question is that there are many error-correction mechanisms working at many levels of physical implementation, from the level of cognitive systems down to the neuronal level. Since these are all cases of “sub-threshold error correction”, and such correction is pervasive, it seems that there is no room left for ballistic automatic processes.
This allows us to further clarify something not explicit in the paper, i.e. the relationship between automatic control and metacognitive feelings. We consider it important that the type-1 metacognition that enables automatic control is on many occasions manifested as a feeling (the feeling of error, the feeling of forgetting, etc.). It is these feelings that allow us to claim that the error-detection and -correction processes are susceptible to being controlled by the agent: just as you are able to correct your posture when you ride your bicycle and feel like you’re falling, you are able to correct your recollection process when you try to remember and feel like you’re forgetting something.
Now, we do not mean to say that the processes underlying episodic recollection are always felt, or that there is a clear-cut distinction between automatic processes that are accompanied by a feeling and those that are not. The same kind of process may be felt on one occasion and not on another; there may be individual differences in feeling-based metacognitive capacities; sensitivity to the feelings may vary depending on the direction of attention, circumstantial features, other tasks being performed, etc. In short, some automatic error-correction processes may be imperceptible to the agent, and in that case they would count as ballistic. But our point is that there are instances in which automatic error-correction processes are perceptible, and thereby agentively guided, via metacognitive feelings, and in those cases they would count as controlled.
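A toy way to picture this contrast (the threshold, the numbers, and the function names are all invented purely for illustration; this is not meant as a model of the actual mechanism): the same kind of monitoring signal can stay sub-threshold, in which case any correction runs off without ever being felt, or it can cross an awareness threshold and surface as a metacognitive feeling of error that the agent can use to guide the recollection.

```python
AWARENESS_THRESHOLD = 0.6  # hypothetical value, for illustration only

def monitor(candidate: dict) -> float:
    """Return a made-up error signal for a candidate mnemonic reconstruction."""
    return candidate.get("mismatch", 0.0)

def remember(candidates: list) -> dict:
    for candidate in candidates:
        error_signal = monitor(candidate)
        if error_signal >= AWARENESS_THRESHOLD:
            # The signal surfaces as a feeling of error: the agent can reject the
            # candidate and keep searching (controlled, feeling-guided remembering).
            print(f"feeling of error ({error_signal:.2f}): rejecting {candidate['content']!r}")
            continue
        # Below threshold: whatever correction happens stays sub-personal and unfelt;
        # from the agent's point of view this stretch of the process is ballistic.
        return candidate
    return {}

candidates = [
    {"content": "cereal and coffee", "mismatch": 0.8},       # felt as wrong, rejected
    {"content": "toast and orange juice", "mismatch": 0.2},  # accepted without felt friction
]
print(remember(candidates))
```

The only point of the sketch is that the very same kind of corrective signal counts as ballistic when it never reaches the agent, and as agentively guided when it surfaces as a feeling.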
One more thing about the data on lower levels of control. We are not so familiar with the cases, but it seems to us that the levels you cite cannot explain, and are not identical to, the metacognitive level of control. In order to show this, let us start with a contrast: on the one hand, rhesus monkeys seem to be able to monitor and control their mnemonic reconstruction (Hampton 2001, Smith 2009); on the other hand, capuchin monkeys and pigeons do not seem to be able to monitor and control their mnemonic reconstruction (Inman & Shettleworth, 1999; Smith et al., 2003). It is likely that rhesus and capuchins share many (most?) of the subpersonal mechanisms you cite, yet they differ in that rhesus, but not capuchins and pigeons, can monitor and control their memory. So, our hypothesis is that although rhesus, capuchins and pigeons share many subpersonal control mechanisms, rhesus monkeys have an error-monitoring and error-control mechanism that capuchins and pigeons lack. In this sense, the memory of capuchin monkeys and pigeons might be interpreted as “ballistic”, whereas rhesus memory can be interpreted as “controlled”. Another interesting fact about rhesus monkeys is that, although they are able to monitor and control their mnemonic processes, they do not seem to possess mental concepts or mind-reading capacities (i.e. they fail false-belief tasks). This suggests that they are monitoring and controlling their mnemonic processes in a non-metarepresentational way; that is, this is type-1 monitoring and control, not type-2 metacognition. Our hypothesis is that in these cases rhesus monkeys, in analogy with humans, monitor and control their mnemonic processes through metacognitive feelings.
2) You raise the interesting suggestion that perceptual automatic processes (e.g. the Müller-Lyer illusion) may be more ballistic than automatic imagination processes (like imagining a polar bear when asked not to do so). The former seem not to depend on previous knowledge in the way the latter do: if you don’t know what a polar bear is, you don’t experience the unavoidability of imagining it. Interestingly, however, a series of studies from the 1960s and 70s found group differences in susceptibility to the Müller-Lyer illusion: roughly speaking, the illusion is more ballistic in WEIRD groups than in non-WEIRD groups (Segall et al. 1966; Ahluwalia 1978). One possible (though controversial, see Zeman et al. 2013) explanation is that growing up in urban areas, among perspectives built from 90-degree angles, makes people more susceptible to the illusion than being raised in less urban environments. Be that as it may, the group differences in the perceptual illusion seem to suggest that automatic processes in perception and in imagination may be similarly dependent on past experience or perceptual knowledge.
3) You point out that our example of the credit card against the “simple-connection reply” is inadequate in some respects and motivates further reflection. We seem to be mixing up two different types of intention in this case: the intention to find my credit card at home and the intention to remember my credit card. The two intentions have different satisfaction conditions and therefore should not be conflated, even though one of them may be embedded in the other (the intention of remembering the credit card is embedded in the intention of finding it). We agree with you that they are different intentions (we did not mean to say that they were identical), and that one of them may be embedded in the other. What we wanted to point out with this (unfortunate) case is that there are some mental processes (like remembering) that may be initiated without being explicitly represented or meta-represented by a prior intention, as the simple connection requires, and we thought that the credit card example might be useful to this end. We think that, for reasons of cognitive economy, it is likely that many cognitive processes are initiated, modulated and/or ended without being explicitly represented in the content of a prior intention.
These considerations lead us to your second worry: “how are we measuring ‘correspondence’ between the intended and the produced mental contents? Because it seems to me that there are lots of things one intends to do, and has, as it were, an idea in mind, and the result ends up not coinciding exactly with the intended idea.” Actually, this idea was in the background of our paper and is what motivated us to suggest that in many cases the outputs of cognitive processes are not explicitly represented in prior intentions, and thus there is no prior content against which to assess the “correspondence”. Your example of the TV show “Property Brothers” is nice because it is a clear case where the outcome clearly (but slightly) mismatches the plan. Yet the example is not so appropriate in another respect: it is a case of a physical action (the intention is to do something in the world; therefore, control is exercised by perceptually comparing the image of the plan directly against the image of the outcome). The cases of mental action that we are interested in are more puzzling: in reasoning, imagining, remembering (among others), the contents of one’s prior intention are not so clearly represented and do not strictly determine what the outcome should look like or should be. Let us give some examples. My intention to “divide 45 by 13” does not explicitly include “3.46” among its constituents, so assessing its satisfaction conditions is not and cannot be a matter of directly assessing such a correspondence. The same happens if one aims to construct an argument against Kant’s theory of knowledge, or if one aims to imagine one’s office turned upside down: one does not know what the outcome should look like before one carries out the action. The same happens with remembering: even if there is a sense in which I know what I am trying to remember before reconstructing it, there is another sense in which the reconstruction process tells me something that was not present in my prior intention to remember. If I intend to remember what I had for breakfast yesterday, my intention does not already contain the exact content that I aim to remember (i.e. the image of the dish with the food): otherwise there would be no need for recollecting in the first place. These considerations have led us to think that, in the case of mental actions (such as reasoning, imagining, remembering), satisfaction conditions should not be a matter of assessing correspondence between the intentional content and the outcome. We think that more work is needed in the philosophy of mental action to solve this problem.
4) Each of us interpreted your final question in a different way, so here we provide two answers, each responding to a different interpretation:
4.1 The reason for considering that metacognitive processes, even when automatic, are different from mnemonic processes is that these processes can be dissociated. On the one hand, one may be able to reconstruct the past without being able to assess whether the mnemonic reconstruction is right or wrong. On the other hand, one may have a very poor reconstructive capacity and still be able to assess whether one’s reconstruction of the past is right or wrong. Psychologists and neuroscientists have proposed that this is because each of these capacities is realized by a different neural mechanism: the mnemonic reconstructive capacity is realized by the hippocampus and sensory cortices, whereas metacognition is realized by the prefrontal cortex (see Fleming & Dolan 2012 for a review).
4.2 If there are automatic error-correcting processes that are prior to metacognition, why not also count them as instances of agentive control? — Our answer (closely related to the one we gave to question 1) is that type-1 metacognitive processes count as instances of agentive control insofar as they can be sensed by the agent qua metacognitive feelings. Earlier error-correction processes may also be sensed by the agent, and those may then count as instances of agentive control. But if such earlier processes must remain “sub-threshold”, then they would not be able to count as guiding the action.
Thanks again for your comments, which have given us lots to think about!
Works cited
Ahluwalia, A. (1978). An intra-cultural investigation of susceptibility to ‘perspective’ and ‘non-perspective’ spatial illusions. British Journal of Psychology, 69(2), 233–241.
Fleming, S. M., & Dolan, R. J. (2012). The neural basis of metacognitive ability. Philosophical Transactions of the Royal Society B, 367(1594), 1338–1349.
Inman, A., & Shettleworth, S. J. (1999). Detecting metamemory in nonverbal subjects: A test with pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 25, 389–395.
Hampton, R. R. (2001). Rhesus monkeys know when they remember. Proceedings of the National Academy of Sciences U.S.A, 98, 5359–5362.
Segall, M. H., Campbell, D. T., & Herskovits, M. J. (1966). The influence of culture on visual perception. Indianapolis: Bobbs-Merrill.
Smith, J. D. (2009). The study of animal metacognition. Trends in Cognitive Sciences, 13(9), 389–396. https://doi.org/10.1016/j.tics.2009.06.009
Smith, J. D., Shields, W., & Washburn, D. (2003). The comparative psychology of uncertainty monitoring and metacognition. Behavioral and Brain Sciences, 26(3), 317–339.
Zeman, A., Obst, O., Brooks, K. R., & Rich, A. N. (2013). The Müller-Lyer illusion in a computational model of biological object recognition. PLOS ONE, 8(2), e56126.
Thanks for the reply, Juan Pablo. I think we are agreed that there is no conflict between our perspectives on this issue, at least at a conceptual level (I’d like to think more about the interesting empirical background regarding metacognitive control that you raise).
Psychologists, in characterizing automaticity/control, admit that they don’t have definitions of the distinction. They resort to rough-and-ready criteria. I don’t think philosophers should simply accept distinctions like type 1 and type 2. The dichotomy ends up being too blunt to capture the interesting dynamics of agency, the give and take between automaticity and control. Once you make the relativization move, you buy yourself a rich set of concepts for characterizing agency. For example, I can imagine someone objecting to your account by noting that the mnemonic processing in question seems pervasively ballistic and using this criterion to argue that the process kind is automatic and hence not a candidate for agency. But on the relativization account, which eschews classifying types of processes as automatic or controlled, such a response would not undercut your claim of metacognitive control. Your way out of such objections is precisely a more nuanced view of how to understand that contrast.
As we agree, you can distinguish between metacognitive and intentional control. If you are right, that provides a helpful expansion of the conceptually precise ways in which we can characterize agency in the mental domain. I would just leave you with the final thought that the relativization move must always be embedded in a larger theory of how two systems interact such that one counts as controlling the other or, in the absence of control, of how features of the other end up being automatic. We should then think about mechanistic/computational models of interaction between metacognitive and memory systems to flesh out the postulated top-down modulating interactions. I suspect that those models will in many cases end up identifying failures of informational encapsulation, something that will be the topic of the third week of this conference. Which is to say, those broader issues might be of relevance to the development of your theory.
Hello, Santiago & Juan Pablo,
Thanks for this great presentation and excellent paper. The topic of mental action is important to much philosophy of mind and action yet still relatively under-discussed in the literature, so your paper is a welcome addition to a slowly-but-surely growing body of research.
There is much that you say in the paper with which I’m sympathetic, especially your criticisms of ballistic views. However, I’d like to raise a worry that, I think, is akin to the ‘absent agent’ problem that plagues many causal accounts of intentional bodily action.
In your presentation you claim that conscious metacognitive feelings motivate you to do or stop doing something, and that when remembering is guided by these metacognitive feelings, you are exerting agentive automatic control, so that what is taking place is a mental action you perform.
I assume (1) that ‘motivate’ and ‘guide’ are causal notions, and (2) that the conscious experience of the onset of the relevant feelings is a mental event that occurs in consciousness. If so, then the agentive automatic control that you exert over the process of remembering is wielded not by anything that you are doing, but by the occurrence of your mental events. But, if that is the case, in what sense can this be control by you, the conscious agent in question? To get that conclusion, it seems we need an additional premise in the argument.
Cheers,
Michael
Dear Michael,
Thanks for your question!
We would like to propose two lines of reply, to see what you think about them.
One line is to say that what we need to do in order for the agent not to disappear is to rephrase (or reinterpret) the claims, so that the claim “when remembering is guided by these metacognitive feelings, you are exerting agentive automatic control” should be interpreted as: “when you guide the remembering process by means of metacognitive feelings, you are exerting agentive automatic control”. The idea is that the conscious experience of the metacognitive feelings is the means by which you, the agent, guide the remembering process. (This should imply that it is the agent, not the metacognitive feelings themselves, who cause the action. We will keep thinking about how to make the causal aspects of the account more precise.)
The other line is the one you suggest, i.e. adding a premise to the argument. We were wondering whether you have a specific premise in mind that we could add, and what that premise might be…
Thanks again!
Dear Michael,
Thank you again for the question.
We would not like to say that motivation and guidance are causal mental states if this means that, once they occur, they cause the mental action directly. We do not have a full-fledged account of motivation and guidance, but when we wrote the paper we were not thinking about these notions as causal in the triggering sense (though not as epiphenomenal either!). There are two notions of causation that are often discussed in the literature on action (e.g. Dretske 1988): the triggering cause and the structuring cause. The triggering cause is the efficient cause in the Aristotelian sense, whereas the structuring cause shapes or modulates the action. For example, I may be motivated to drink some water (I’m thirsty), but I don’t do it because I have blood tests in five minutes and should not drink before the tests. The motivational state, the thirst, does not causally trigger the action of drinking, and after the tests it only motivates me to go look for water. It is the agent who initiates the action, and the structuring cause modulates it: in the example, the thirst motivates me to go get some water quickly. In the case of metacognitive feelings, I may have a feeling of error that motivates me to check the calculation I just carried out, but because of hurry, stress, or someone next to me distracting me, I fail to check the calculation. In this case, the feeling of error may still shape some of my actions: I am hesitant to communicate the result, I am slower in some respects (the slowing-down effect we report in the paper), and so on. So, feelings and motivational states are not triggering causes (because they may fail to trigger the actions they motivate), but they are structuring causes that shape or modulate the way the agent produces her bodily or mental action.
It’d be great to know what you think about this distinction.
Cited Works
Dretske, F. (1988). Explaining Behavior. The MIT Press.
Hello, Santiago & Juan Pablo, thanks for the helpful replies.
I think the distinction between triggering and structuring causes is useful here, and I too would suggest a similar rewording of the claim, so that your causal role in guiding the remembering process by means of your metacognitive feelings is made explicit.
If we make your causal role as agent an explicit aspect of the account, it seems that we must also provide an account of what sort of entity you are. That is, if we say that you initiate the action and that structuring causes merely modulate what happens, so that your metacognitive feelings and motivational states are not triggering but structuring causes shaping the way you produce your bodily or mental action, then we must also say something about your metaphysical status as agent, and about how you are playing this important causal role in producing your actions.
(I’ve tried to do this: http://www.tandfonline.com/doi/full/10.1080/00455091.2017.1285643)
I think that recognizing you, and your causal role, as agent is crucial to understanding mental action and the metaphysics of mind more broadly, so I’d be curious to know what you think about this.
Thanks again for the replies!
Cheers,
Michael
Dear Michael,
Thanks for this reply.
We agree that it’s crucial to include the agent in any account of bodily or mental action. The agent should be included as initiating, sustaining, controlling and ending the bodily movement (in the case of bodily action) and/or the mental process (in the case of mental action), as you claim in your paper (Brent, 2017). However, we do not have a theory of what exactly the agent is; this needs further thought. We would only like to say that we would not want to commit to a very demanding theory of agency that requires, for example, meta-representation, or full consciousness of every detail of the production of the action.
Thanks again!
Cited work
Brent, M. (2017). Agent causation as a solution to the problem of action. Canadian Journal of Philosophy. https://doi.org/10.1080/00455091.2017.1285643