Monday, February 26, 2007

A Positive and Pragmatic Theory of Computation and Cognition

Early digital computers were frequently referred to as “electronic brains.” Able to perform complex mathematical calculations, play games and even write simple poetry, these machines clearly displayed characteristics previously thought to be the exclusive province of human beings. Moreover, they performed these feats with such rapidity that it was difficult not to imagine that, with the continued progress of science, our electronic brains would one day exceed our own rather limited cognitive abilities.

This paper argues that computationalism about the mind, properly understood, is an invaluable “research program” in our quest to understand cognition. The criterion of selection should be, roughly, that whatever yields the most successful predictive record becomes our theory of mind. It will become obvious that all the alternative proposals are ultimately committed to the very same viewpoint. The author’s position is that brain processes are computational. To appreciate this, however, it is necessary to first develop a sketch of computation, of what a cognitive agent is, and of how it may function. This perspective will then provide the foundation upon which to sensibly discuss the meanings and relative merits of the various ideas. This approach is somewhat unusual, perhaps, but appropriate given the interrelated nature of the concepts being investigated. Ultimately, these notions simply cannot be understood in isolation, but only as a system, as a coherent philosophy of mind (indeed, a philosophy of everything).

Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on the symbols’ shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very nearly omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly not everything can be given every systematic interpretation.
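The shape/interpretation distinction can be made concrete with a minimal sketch (illustrative only; the functions and the unary notation are my own, not the author's): a rewriting rule that operates purely on symbol shapes, paired with one systematic interpretation under which the manipulation counts as addition.

```python
# Shape-based symbol manipulation: the rule knows nothing about numbers;
# it only deletes a '+' character. Interpretation is imposed from outside.

def rewrite(tape: str) -> str:
    """Apply a purely shape-based rule: remove the '+' separator."""
    return tape.replace("+", "")

def interpret(tape: str) -> int:
    """One systematic interpretation: a run of n '1's denotes the number n."""
    return tape.count("1")

tape = "11+111"         # just shapes; nothing intrinsically numeric here
result = rewrite(tape)  # "11111"
# Under the chosen interpretation, the manipulation is addition: 2 + 3 = 5.
assert interpret(result) == interpret("11") + interpret("111")
```

The same strings could be given a different systematic interpretation (say, reading runs of '1's in binary), under which the very same shape rule would no longer count as addition; that is the sense in which interpretation is arbitrary relative to shape.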

Cognitive agents are usually described as small parts of the world in which they exist and are thus assumed to have limited abilities. They are agents that incorporate and use knowledge of the external world to improve their chances of survival. In order to cope with the vagaries of its world, an agent needs to select and execute the action most appropriate to its goals. A proper model of the mind is a (presumably physical) system that implements a computation capable of providing the necessary answers. The relation between cognition and computation is thus clear.

An agent’s model may be innate or it may be constructed (learnt) as a result of sensing and possibly interacting with the environment. It may be static or continuously refined, again as a result of interactions. Given such a model of the world, sensory input must somehow combine with it to determine actions relevant to the agent’s present situation and goal. Any discrepancy between the model's predictions and the subsequent sensory input will indicate errors in the model and can thus provide the basis for updating it.
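The update-from-discrepancy idea can be sketched as a simple delta rule. This is a hedged illustration under my own assumptions (a scalar model, a fixed learning rate); the text itself does not commit to any particular update mechanism.

```python
# Refining a model from the discrepancy between its prediction and
# the subsequent sensory input (a minimal delta-rule sketch).

def update(estimate: float, observation: float, rate: float = 0.1) -> float:
    error = observation - estimate   # discrepancy signals an error in the model
    return estimate + rate * error   # nudge the model toward the input

estimate = 0.0                       # the model's current prediction
for observation in [1.0, 1.0, 1.0]:
    estimate = update(estimate, observation)
# After repeated matching observations, the estimate has moved toward 1.0.
```

A static (innate) model would simply skip the update step; the same matching machinery applies either way.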

Given the interpretations of computation and cognition outlined above, is computationalism, the view that cognition is a form of computation, correct? There are at least three ways to interpret this question: (1) Can cognition be described (simulated) by computations? (2) Is cognition literally computation? (3) Does the notion of computation offer a suitable basis for understanding and explaining cognition?

Based on our analysis, the answer to the first form of the question, “Can cognition be described by computations?” would seem to be, “Yes.” Clearly, we can construct computational simulations of cognition at various levels; the question, though, presumably refers to description at the “lowest” physical level (if there is any sense to this notion). Assuming that the mind/brain has a purely physical basis (i.e. no part of it, a soul perhaps, would continue to exist were its material components to be destroyed), then, since a program or computation is simply a description of a causal system, answering the question in the affirmative requires another physical system having equivalent causal dynamics that we can utilize as the model. This is an empirical problem.

The second form of the question, “Is cognition literally computation?” cannot be answered quite so easily. Computation is certainly part of cognition (specifically, the agent’s model of the environment). But what of the other elements: the input and output pathways linking the model to the external world, the goals, the matching and decision-making mechanisms, and so on? Are they also computational? It would seem that if they are physical/causal systems, then presumably they too can be interpreted computationally, in which case we should also accept that cognition is quite literally a matter of implementing the right form of computational system. John Searle appeals to intentionality to distinguish the goals of computational devices from human mental processes: a computational device may be able to produce the word “tree,” but its token will never be about an actual tree. This is misleading, and Searle’s response to the “many mansions reply” is likewise misleading, since it assumes a very unimaginative posture regarding what emerging technologies are capable of. Dennett’s further replies concerning our intuitions need not detain us either, since Dennett himself pumps intuitions in a positive way when presenting his own thought experiments.

The final interpretation, “Does the notion of computation offer any explanatory value when it comes to understanding cognition?” is of more immediate concern to cognitive science and artificial intelligence researchers. The case against computational theories of the mind has been growing stronger, with claims to the effect that computation lacks semantics, is disembodied, is insensitive to real-world timing constraints, operates at the wrong level, and, most dramatically, that since every system can be seen as implementing every computation, the notion is just too pervasive to be meaningful.

Clearly, computation is important from a practical perspective and also, perhaps, from a historical one. The case against the computational view of mind is misguided. While every system can indeed be viewed as implementing some computation, every system simply cannot implement every computation. Moreover, the fact that computation lacks certain elements of mind, such as semantics, is not a flaw, since our objective must be to explain how these features arise; if computation already possessed them it could not provide any basis for understanding them. Further, the notion of a computational model is clearly central to the cognitive process and, at least in the case of semantics, it would appear that we can actually develop explanations in these terms. AI researchers first suggested that symbols and mental states gained their “meaning” from other representations. Searle’s (1980) infamous Chinese Room Argument, in which a person could carry out the steps of a program without understanding a word of Chinese, was the first nail in the coffin of this idea. The underlying problem is one of grounding: if each word is defined only in terms of other words, then, unless someone provides the meanings of a few primitive words, there is no hope of understanding anything.

Given the analysis of cognition in terms of models, the solution is basically straightforward. A representation (state) has meaning for the agent just in case it has predictive value. On relevant occasions the symbol might be activated via causal connections with the external world, indicating that the particular feature it represents is present. On other occasions it may become active as a consequence of the execution of the model and thus constitute a prediction. It may not even have a real-world counterpart, but simply be part of a theory (model), which provides answers in the absence of anything better. It is not, of course, necessary that the predictions always be correct in order for the state to be counted as a meaningful representation. Neither is it necessary that the agent ever display behavior based on the representation.

An agent’s model of its world might be viewed as a formal system comprising symbols and inference rules. A number of questions thus arise, the first and foremost of which concerns the origin of these symbols and rules. Are they, perhaps, innate, or does the agent somehow select an appropriate set of symbols? Acquiring (and maintaining) a suitable set of base symbols for a given external world is likely to be one of the primary determinants of success or failure for an agent.

How, then, might an agent “discover” the symbols it needs? An outline answer might go something like this. Agents have a number of sensors and actuators. The problem for any agent is to decide which actuator (if any) to invoke at any particular moment. Its objective is to satisfy its needs (food, sex, comfort, etc.). In some cases evolution may have endowed it with automatic (innate) mechanisms that restore it to its “ideal” state. In other situations, however, it will need to instigate “deliberate” actions in the hope of achieving these goals. On the (necessary) assumption that there is some regularity in the environment, and lacking any other prior knowledge, the best an agent can do is to store past sensory input patterns and then match the current situation against these in the hope that they might repeat. The matching process will thus produce a set of expectations and, assuming that the agent has also stored information about its past actions and their effects, it should then be able to compute the “intersection” between these, its perceived situation and its goals, and hence select the most appropriate action to take.
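The store-match-intersect process just described can be sketched as follows. Everything here (feature sets, the overlap measure, the episode format) is an assumed simplification for illustration, not a mechanism proposed in the text.

```python
# Sketch: select an action by matching the current situation against stored
# (situation, action, outcome) episodes and intersecting with the goal.
from typing import List, Tuple

Episode = Tuple[frozenset, str, frozenset]   # (situation, action, outcome)

def overlap(a: frozenset, b: frozenset) -> int:
    """Crude match score: number of shared features (the 'intersection')."""
    return len(a & b)

def choose_action(memory: List[Episode], situation: frozenset,
                  goal: frozenset) -> str:
    best, best_score = "do_nothing", -1
    for past_situation, action, outcome in memory:
        # Combine how well the past situation matches the present one with
        # how well its remembered outcome serves the current goal.
        score = overlap(situation, past_situation) + overlap(outcome, goal)
        if score > best_score:
            best, best_score = action, score
    return best

memory = [
    (frozenset({"hungry", "food_visible"}), "eat",  frozenset({"sated"})),
    (frozenset({"cold"}),                   "move", frozenset({"warm"})),
]
now, goal = frozenset({"hungry", "food_visible"}), frozenset({"sated"})
print(choose_action(memory, now, goal))  # -> eat
```

The matched episodes also double as the set of expectations: whichever stored outcomes are retrieved constitute the agent's predictions about what will happen next.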

Given the variation in input patterns, the initial problem is to identify sets of sensor inputs that regularly occur together. Having isolated these initial sets, the agent can further group them into less frequently occurring sets, and so on. Gradually, it should also be able to determine combinations of these sets that are mutually exclusive of each other (by observing that they share terms, for example). All of these groupings form the agent’s (internal) symbols. Another set of symbols (external ones) is formed when the agent acquires language. Meaning for these symbols involves an additional mapping from the external word itself to the representation of the corresponding concept, in the spirit of a commonsense “correspondence theory” of truth.
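The first grouping step, finding sensor inputs that regularly occur together, can be sketched with simple co-occurrence counting (the data, the pairwise restriction, and the threshold are my own illustrative assumptions):

```python
# Sketch of symbol discovery: count which sensor features co-occur across
# observations and treat frequently co-occurring pairs as candidate symbols.
from collections import Counter
from itertools import combinations

observations = [
    {"red", "round", "sweet"},
    {"red", "round"},
    {"green", "tall"},
    {"red", "round", "sweet"},
]

pair_counts = Counter()
for obs in observations:
    for pair in combinations(sorted(obs), 2):
        pair_counts[pair] += 1

# Pairs seen in more than half of the observations become candidate symbols.
threshold = len(observations) // 2 + 1
symbols = {pair for pair, n in pair_counts.items() if n >= threshold}
print(symbols)  # only ('red', 'round') clears the threshold here
```

Repeating the procedure over these candidate pairs, rather than raw features, would yield the larger and rarer groupings the text mentions.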

As for the inference rules, they must be logical, since the agent must make the correct, rational, “logical” choices. We can thus expect logical rules to be part of an agent’s makeup; in biological agents, evolution will have produced and favored mechanisms which behave as if they were performing logical inferences. Classical logic, being the result of abstraction from our spoken language, is evidence for this, although, of course, it does not account for all our observed reasoning. Certainly, human beings frequently fail to reason perfectly (perhaps due to biological limitations, lack of time, incorrect or incomplete knowledge, etc.), but the fact remains that an agent’s mechanisms must be inherently logical.
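What "behaving as if performing logical inference" might amount to can be sketched as forward chaining over if-then rules; the toy facts and rules here are assumptions of mine, not claims from the text.

```python
# Sketch: inference rules as forward chaining. From a set of facts, keep
# applying "if premises then conclusion" rules until nothing new follows.
rules = {
    ("raining",): "wet_ground",
    ("wet_ground",): "slippery",
}

def infer(facts: set) -> set:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules.items():
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)   # modus ponens, applied exhaustively
                changed = True
    return derived

print(infer({"raining"}))  # derives wet_ground, then slippery
```

A mechanism need not represent these rules explicitly to count; it need only behave as if it were applying them, which is the weaker claim the paragraph makes about biological agents.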

So, assuming that silicon and biological substrates are both capable of supporting the necessary computational structures, the choice between them is an organizational one, and cognitive agents could equally well employ either. Of course, there may be other reasons to prefer one form to the other. It may be that one is easier to implement in a particular technology, whether silicon or incorporated biological processes, or that it requires less hardware or works more reliably. Building a cybernetic organism, using brain tissue where necessary and wires or silicon where they suffice, is an attractive prospect for computationalist theorists.

Does computation, an abstract notion lacking semantics and real-world interaction, offer a suitable basis for explaining cognition? The answer would appear to be, “Yes,” indeed, it would seem to offer the only possible explanation.

The basic argument of this paper is as follows. Models enable us to make predictions. Constructing a model requires building a physical “device” whose states and dynamics map onto those of the target system. A convenient way to do this is to write a program that can be executed on a digital computer. The program, and the computation it defines, is thus an abstract specification of the desired causal system. To maximize their chances of success, cognitive agents need to make predictions about their external world. It therefore seems reasonable to assume that their architecture must include a model that can be used to make such predictions. This model can be described and interpreted in computational terms, so computationalism must offer an appropriate basis for explanation.

While behaviorists and dynamicists claim to offer alternative models, it is clear that these relate to organizational concerns and thus do not deflect the essential computational explanation, for they too are computations. The argument put forward by roboticists, psychologists and social theorists, that intelligence or representation demands situated interaction, would appear to be essentially correct on the analysis presented here. A state is representational only on the basis of its predictive value to the agent. From the computational viewpoint this is perfectly natural and answers the question of semantics. Finally, the philosophical argument, which claims to show that computation is a potentially vacuous concept, was seen to be misleading. Mapping every computation to every system is simply not possible because the proper causal structure is lacking. Computation is about prediction and while it is possible to map any specific computational sequence onto (almost) any physical system, there is little predictive value in doing this. Computationalism, in the end, stands as a viable theory and many of its attractive features such as multiple realizability and practicality appeal to AI researchers and cognitive scientists.


Chalmers, D.J. (1995). Absent qualia, fading qualia, dancing qualia. In T. Metzinger (ed.), Conscious Experience. Ferdinand Schöningh.

Chalmers, D.J. (1996a). Does a rock implement every finite-state automaton? Synthese.

Chalmers, D.J. (1996b). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.

Harnad, S. (1989). Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1:5-25.

Kim, J. (2006). Philosophy of Mind, 2nd ed. Westview Press, p. 145 (the “Chinese Room” experiment).

Searle, J.R. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association 64:21-37.

Searle, J.R. (1980). Minds, brains, and programs. Reprinted in D. Rosenthal (ed.) (1991), The Nature of Mind, p. 509.
