Last night, on New Year's Eve, I was hanging out at my older brother's house in Magnolia, where I met a friend of his who is working towards a Ph.D. in neuroscience. His particular field involves creating neuro-computational models of the brain, which I find fascinating.
Though I've written a lot on this blog about eliminative materialism, which is an anti-reductionist view, much of what contemporary neuroscience does is in fact quasi-reductionist. The computational theory of mind is basically a modern version of functionalism, arguing that minds are fundamentally information-processing machines. The eliminativist view, however, is not incompatible with computationalism. A lot of mind theorists -- and I say mind theorists because this isn't a formal pronouncement of science -- think that computationalism will end up being eliminativist. Eliminativism is, in my opinion, just a few theoretical steps ahead of the scientific pronouncements, though critics say eliminativism is "premature" at this point. Neuroscience is nowhere near a complete model of the brain. Instead there is a kind of model pluralism going on, where various models explain different processes, with no overarching, philosophically satisfying picture of what the mind really does.
But the real question computational neuroscience is fascinated with is whether and when a physically realizable computational model could match the human brain. That is to say, whether multiple realizability is possible. John Searle from Berkeley argued that multiple realizability was basically impossible in his essay "Minds, Brains, and Programs." The Chinese Room is supposed to show that a machine could never understand the way humans do. There is supposed to be something incredibly unique and exceptional about the way the human brain "secretes" understanding, according to Searle. Without some essential ingredient, like human brain milk or what have you, understanding is not possible.
I think Searle's argument is near-sighted because he assumes a relatively low level of information processing. He also builds an internal "understanding process" into his model, then says that that process is really external to the model. And, as with most of the theories I object to, he uses a semiotic theory that places "understanding" outside the plane of signifiers, thereby disallowing any causal connection to take place.
But recall the argument Hans Moravec, the absent-minded genius of AI robotics, made in his essay "When Will Computer Hardware Match the Human Brain?" If the information processing in the brain is of such a high order of magnitude, it makes sense that the same level of information would be required to match the capacities of the human brain. Which, at this point, is not possible. Searle's argument is an unimaginative one, since it assumes (or rhetorically asks us to intuit) that a process can be simulated with less information than the original.
Searle relies on analog systems for his analogy, using phrases like "water pipes" and "valves". What AI researchers have in mind is not some clunky Frankenstein, but complex systems capable of high-magnitude information processing and content management. Searle can only think of "syntax" as the closest approximation of understanding an AI system can achieve. Yet an advanced matrix for assigning truth values to syntactic arrangements, with the possibility of confirming those values and associating them with other values, seems to be a better approximation of understanding than Searle allows.
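To make that a little less hand-wavy, here is a minimal toy sketch in Python of what I mean by assigning truth values to syntactic arrangements and then confirming and associating them. Everything in it (the BeliefStore name, the update rules, the numbers) is my own invention for illustration, not anything drawn from Searle, Moravec, or actual neuroscience.

# Toy sketch: syntactic arrangements (sentences) carry truth values that can be
# confirmed against observations and associated with other sentences.
# Purely illustrative; not a model of the brain or of any real AI system.

class BeliefStore:
    def __init__(self):
        # sentence -> {"truth": value in [0, 1], "links": set of related sentences}
        self.records = {}

    def register(self, sentence, truth=0.5):
        """Store a syntactic arrangement with an initial, uncommitted truth value."""
        self.records.setdefault(sentence, {"truth": truth, "links": set()})

    def associate(self, a, b):
        """Link two sentences so their values can inform one another."""
        self.register(a)
        self.register(b)
        self.records[a]["links"].add(b)
        self.records[b]["links"].add(a)

    def confirm(self, sentence, observation):
        """Nudge a sentence's truth value toward an observed outcome and
        weakly propagate the change to associated sentences."""
        self.register(sentence)
        record = self.records[sentence]
        record["truth"] = 0.5 * record["truth"] + 0.5 * (1.0 if observation else 0.0)
        for other in record["links"]:
            linked = self.records[other]
            linked["truth"] = 0.9 * linked["truth"] + 0.1 * record["truth"]

store = BeliefStore()
store.associate("snow is white", "snow reflects light")
store.confirm("snow is white", observation=True)
print(store.records["snow reflects light"]["truth"])  # nudged upward by association

Nothing about this toy "understands" anything, of course. The point is only that syntax plus confirmation plus association is already a richer setup than the pure rule-following that Searle's water pipes and valves are meant to evoke.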
These kinds of processes are certainly realizable. Moravec argues that research within semiconductor companies makes it quite clear that existing techniques can be extended to reach the computational complexity of the human brain. Memory capacities are reaching tens of billions of bits and multiprocessor chips are passing 100,000 MIPS, against the roughly 100 million MIPS Moravec estimates the brain to be worth, a gap he expects ordinary progress to close within a few decades. Circuitry is also incorporating a growing number of quantum interference components; hence the development of the quantum computer. As production techniques for those tiny components are perfected, they will begin to take over the chips, and the pace of computer progress may steepen further.
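For a sense of where numbers like that come from, here is a rough back-of-the-envelope reconstruction of the kind of retina-scaling estimate Moravec uses. The specific figures below are approximate and from memory, so treat them as assumptions rather than quotations from the essay.

import math

# Rough Moravec-style scaling estimate. All figures are assumptions for illustration.
retina_mips = 1_000              # assumed: retina-like image processing is worth ~1,000 MIPS
brain_to_retina_ratio = 75_000   # assumed: the whole brain is very roughly 75,000x the retina

brain_mips = retina_mips * brain_to_retina_ratio
print(f"brain estimate: ~{brain_mips:,} MIPS")   # ~75,000,000 MIPS, on the order of 10^8

chip_mips = 100_000              # the multiprocessor-chip figure cited above
shortfall = brain_mips / chip_mips
print(f"shortfall: ~{shortfall:,.0f}x")          # several hundred times short

# At a doubling of available MIPS roughly every 18 months:
years_to_close = math.log2(shortfall) * 1.5
print(f"about {years_to_close:.0f} years of doubling to close the gap")

On that kind of arithmetic, 100,000-MIPS chips are not the finish line but a waypoint a decade or two short of the brain-scale figure, which is roughly the gap Moravec expects continued progress, quantum-interference components included, to close.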
Searle, though, would still say that no matter how much information goes into the system, it is not capable of understanding. It seems rather ridiculous not to allow the system any means of defining its variables through some confirmation method other than the copying it already does. Understanding is relational and associative; it's not something that happens when copying and pasting. And it seems highly likely that neuroscience will eventually be able to produce models complex enough to be used in artificial systems that simulate the same processes that take place in the human brain.
Tuesday, January 01, 2008
When Will Computer Hardware Match the Human Brain?
Submitted by Acumensch at 1.1.08
Tag Cloud: Transhumanism
4 comments:
I think that you're misconstruing the import of Searle's argument. It doesn't really matter if we produce a computer capable of conversing in Chinese in such a way that it is indistinguishable from a human being (or, for that matter, performing any cognitive task at a human or superhuman level). What is at stake in the Chinese Room is whether or not such a system is in principle the same sort of thing as human cognition. If you think that human cognition is simply about symbol manipulation (which is what computational models of mind boil down to), then you have to admit either that the guy in the room 'understands' Chinese by manipulating symbols according to the rules he uses to produce human-level outputs, or that the room as a whole 'understands' Chinese. The first of these options isn't appealing because it doesn't seem that the guy in the room does understand Chinese: he understands English, and through it produces responses indiscernible from a native speaker's. The second option is ridiculous, but that doesn't stop some people from proposing it.
Of course, the Chinese Room is an impossible scenario. But it doesn't matter that it would never come about; it's meant to demonstrate something about the way human cognition works. Either you find it demonstrative or you don't. It has no bearing, in the end, on whether or not our clever computer scientists will cook up a computational program that can 'think' in a way that produces linguistic responses indiscernible from native speakers: that is an epistemological concern. The Chinese Room is concerned with problems of ontology rather than epistemology.
There are many ways to frame it. But Hans Moravec, in his book "Robot," argues against Searle by saying that he does not consider, or even fathom, the level of information processing going on. Which is why Searle thinks the brain can be analogized by a billion button-pushers. Chalmers uses the economy of Bolivia. They're just bad examples to start with.
I think you're right to say "indiscernible," because building these machines really comes down to a behavioristic criterion for understanding in the end. That's what it means for neuroscience and for Hans Moravec. That view also underlies one very popular critique of solipsism.
But analytically, think about the way we define words, and how symbols are rearranged basically to define other symbols. That's symbol manipulation, and yet when we define a variable we think we understand it. The Chinese Room doesn't allow for definitions, just rearrangement. Yet how can a sentence be understood without definitions? Well, Searle externalizes that when he pinpoints the human in the room. The components of understanding are broken up and then interrogated separately. How can you really have a convincing argument against functionalism when you've broken up all the functions?
I think this is why people respond to the Chinese Room by saying that the person manipulating the symbols doesn't understand Chinese, but the room does: the definition of words is implicit in the way they are manipulated. Of course the person in the room doesn't understand what he is manipulating; we wouldn't say that the hippocampus understands English, even though it probably plays a role in human beings' ability to understand English.
I have other sympathies, really. Analytic philosophy of mind boggles the mind; there are other ways to get at solipsism (you can take a transcendental road, for instance).
"But analytically, think about the way we define words, and how symbols are rearranged basically to define other symbols."
I dunno: is that how we define words though? I find a phenomenological account far more convincing. I mean, in endorsing one at least we know where our opacity lies (in the intentional ability of subjects). Phenom. is inherently 'concrete' and so not symbolic, and essentially qualitative, and so not computational. It's coming at the problem from the complete opposite side.
What's the phenomenological account? The way I was talking about definitions is actually the way Derrida talks about it. When we use words we're using signifiers, which ultimately 'refer' to other signifiers. There is no connection to the world because language doesn't hold the world. The world is arbitrary. The "referent" is always just another signifier... ad infinitum. Until eventually you accumulate enough signs that are basically saying "I'm not this; I'm much more like this" and soon you approach an understanding of where the sign falls in relation to other signs.
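Just to make that picture concrete, here is a toy of my own (the vocabulary and structure are invented for illustration, not anything from Derrida or de Saussure): a dictionary in which every entry is "defined" only by pointing at other entries, so that locating a sign just means seeing which signs it sits near.

# Toy signifier graph: every word is "defined" only in terms of other words.
definitions = {
    "bachelor":  ["unmarried", "man"],
    "unmarried": ["not", "married"],
    "married":   ["having", "spouse"],
    "spouse":    ["married", "partner"],
    "man":       ["adult", "male"],
}

def neighborhood(word, depth=2):
    """Collect the signifiers a word is related to, a few hops out.
    There is never a 'referent' at the bottom -- only more signifiers."""
    seen = {word}
    frontier = [word]
    for _ in range(depth):
        next_frontier = []
        for current in frontier:
            for other in definitions.get(current, []):
                if other not in seen:
                    seen.add(other)
                    next_frontier.append(other)
        frontier = next_frontier
    return seen - {word}

print(neighborhood("bachelor"))
# e.g. {'unmarried', 'man', 'not', 'married', 'adult', 'male'}

"Understanding" a word, on this picture, is nothing over and above knowing where it falls in that web of other signs, which is exactly the relational, associative kind of definition the Chinese Room setup never gives its occupant.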
Searle seems to have a linguistic theory where the only way the human could learn Chinese is by using it to describe the world. That seems arbitrary. The guy could learn Chinese by associating it with other signifiers, e.g. the English ones.
Derrida gets his ideas from de Saussure, the Swiss linguist.