I think John Searle might be my favorite living philosopher. But when I tell my friends this, they recoil in horror. “That bastard?” one friend cries. “He tripled my rent!” “Oh geebus,” another cries. “The Chinese Room argument is awful.”
I will not profess to have an opinion on whether the fourteenth amendment requires the City of Berkeley to provide a rational basis for only allowing its landlords to raise rents at forty percent of the increase in the consumer price index, but I must admit I fail to see what the question has to do with Searle as a philosopher.
Nonetheless, I will defend the Chinese Room Argument. The basic idea, for those who aren’t familiar with it, is this: imagine yourself being placed in a room and given instructions on how to convert one set of Chinese symbols to another. To outsiders, if the instructions are good enough, it will seem as if you understand Chinese. But you do not consciously understand Chinese — you are simply following instructions. Thus, no computer can ever consciously understand Chinese, because no computer does more than what you’re doing — it’s simply following a set of instructions. (Indeed, being unconscious, it’s doing far less.)
The Chinese Room Argument works mainly as a forcing maneuver. There are only two ways out of it: you can either claim that no one is conscious or that everything is conscious. If you claim that no one is conscious, then there is no problem. Sure, the man doesn’t consciously understand Chinese, but he doesn’t consciously understand English either. However, I don’t think anyone can take this position with a straight face. (Even Daniel Dennett is embarrassed to admit it in public.)
The alternative is to say that while perhaps the man doesn’t consciously understand Chinese, the room does. (This is functionalism.) I think it’s pretty patently absurd, but Searle provides a convincing refutation. Functionalists argue that information processes lead to consciousness. Running a certain computer program, whether on a PC or by a man with a book or by beer cups and ping pong balls, will cause that program to be conscious. Searle points out that this is impossible; information processes can’t cause consciousness because they’re not brute facts. We (conscious humans) look at something and decide to interpret it as an information process; but such processes don’t exist in the world and thus can’t have causal powers.
Despite the obvious weakness of the arguments, why do so many of my friends continue to believe in functionalism? The first thing to notice is that most of my friends are computer programmers. There’s something about computer programming that gets you thinking that the brain is nothing more than a special kind of program.
But once you do that, you’re stuck. Because one property of computer programs is that they can run on any sort of hardware. The same program can run on your Mac or a PC or a series of gears and pulleys. Which means it must be the program that’s important; the hardware can’t be relevant. Which is patently absurd.
I used to think that part of the reason my friends believed this was because they had no good alternatives. But I’ve since explained to them Searle’s alternative — consciousness is a natural phenomenon which developed through evolutionary processes and is caused by the actions of the brain in the same way solidity is caused by the actions of atoms — and it hasn’t caused them to abandon their position one bit.
So I tried a different tack. I asked them what they thought was wrong with Searle’s position. And the answer always seems to come down to a confusion between ontology and epistemology. Ontology is a fancy word for the facts of the matter — what actually exists out in the world. And epistemology is the word for the way we know about it. Unless you subscribe to a bizarre philosophical theory, things in the world exist irrespective of whether we know them or not. Behind the TV game show door, there either is a car or there isn’t, even if no one can see in to tell which one is the case. Furthermore, things continue to exist even if we can’t even know them in principle. There appears to be no way for me to ever tell what it feels like for you to taste an orange; nonetheless, there is indeed something that it feels like for you.
My programmer friends’ argument always ends up coming down to this: if a computer program acted conscious, if it plaintively insisted that it was conscious, if it acted in all respects like the conscious people we know in the real world, then it must be conscious. How could we possibly tell if it was not? In short, they believe in the Turing Test as a test for consciousness — anything that acts smart enough to make us think it’s conscious must be conscious.
This was the position Ned Block was trying to refute when he postulated a computer program known as Blockhead. Blockhead is a very simple (although very large) computer program. It simply contains a list of all possible thirty minute conversations. When you say something, Blockhead looks it up in the list, and says whatever the list says it’s supposed to say next. (Obviously such a list would be unreasonably long in practice, perhaps even when heavily compressed, but let us play along theoretically.)
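To make the thought experiment concrete, here is a minimal sketch (in Python) of what Blockhead amounts to, assuming the impossible table already exists; the names and table entries are purely illustrative, not part of Block’s original formulation:

    # Toy sketch of Blockhead: a pure lookup over scripted conversations.
    # The real table would list every possible thirty-minute conversation;
    # these few entries are only for illustration.
    CONVERSATION_TABLE = {
        ("Hello.",): "Hi there, how are you?",
        ("Hello.", "Fine, thanks. And you?"): "Can't complain.",
        # ... one entry for every possible conversation-so-far
    }

    def blockhead_reply(history):
        """Look up whatever the list says to say next; no understanding involved."""
        return CONVERSATION_TABLE[tuple(history)]

    print(blockhead_reply(["Hello."]))  # -> "Hi there, how are you?"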
Having a conversation with Blockhead would be just like having a conversation with a real person. But nobody could seriously claim the program was conscious, right? Well, in fact they do.
One wonders whether these people think their cell phones are conscious. After all, talking to a properly-enabled cell phone works just like talking to a properly-enabled person! (I asked one friend this and his response was that the whole system containing the cell phone and the wires and the person on the other end was conscious.)
The point is that we don’t assign consciousness purely based on behavior. Blockhead acts like it’s conscious and a completely paralyzed person acts like they’re not, yet we all know that the first isn’t conscious and the second is. Instead, we assign consciousness based on causes. We know dogs are conscious because we know they have brains that are very much like ours which cause behavior very much like ours. We don’t make that judgment based on behavior alone.
Criticisms aside, what is the positive argument for John Searle? First, he has done important work in a wide variety of fields. As far as I can tell, he began following up the works of his teachers (like J. L. Austin) on the topic of speech acts, which he generalized to the subject of intentionality, which he solved by saying it was a property of conscious beings, which led him to develop a theory of consciousness. Second, all of his points seem quite reasonable to me and (with a few exceptions) I agree with them. Third, he writes extremely clearly and entertainingly and for a popular audience.
These three seem like a fairly low bar — they’re about what I would expect from myself were I a philosopher — but it’s shocking how few prominent philosophers seem to meet them. Daniel Dennett is a dreadfully prolix writer and is insane. Thomas Nagel comes close but is a fairly committed dualist. Hilary Putnam doesn’t write for a popular audience. Peter Singer doesn’t seem to develop any actual theories. So I can’t think of any. Can you? Suggestions appreciated in the comments.
What about the Norwegian philosopher Arne Næss? He also writes for a popular audience and has coined ideas such as “deep ecology” and “ecological wisdom”…
posted by Albert Francis
on March 14, 2007 #
“There are only two ways out of it: you can either claim that no one is conscious or that everything is conscious”
Third way: consciousness is a VARIABLE, not a BINARY.
That is, “everything is conscious” is like “everything has a temperature” - there’s a range from 0 to a very high level. We’re predisposed to read “everything is conscious” like “everything is hot”, which sounds absurd, but that’s just because of language usage.
I would say a lot of trouble is hidden in the words “given instructions on how to convert one set of Chinese symbols to another”. It may be that this turns out to mean something like “a conscious mind, or access to one in some manner” in practice. That is, it’s not really an argument, but a tautology, basically another artifact of non-rigorous language (i.e., circularly arguing that it’s a mere set of instructions, and minds are taken to be much more glorious than a set of instructions, so if a set of instructions can do what a mind does, a mind must be something else).
posted by Seth Finkelstein
on March 14, 2007 #
I think part of the problem of liking Searle the philosopher is dealing with the arrogant blowhard you will encounter if you ever have the opportunity to take one of his classes.
posted by talboito
on March 14, 2007 #
Have you read anything by David Chalmers?
posted by Mike Bruce
on March 14, 2007 #
One of my own favorite contemporary philosophers, Susan Haack, certainly meets your three criteria of importance, reasonableness, and writing clearly for a popular audience:
http://www.as.miami.edu/phi/haack/SummaryBio.htm
posted by Kermit Snelson
on March 14, 2007 #
“The point is that we don’t assign consciousness purely based on effects. Blockhead acts like it’s conscious and a completely paralyzed person acts like they’re not, yet we all know that the first isn’t conscious and the second is.”
Actually, we do assign consciousness purely based on effects. The only things we perceive of the world are through our senses, which are effects in the physical world. There’s no other way for us to assign consciousness.
“Yet we all know” - what kind of argument is that? No, we don’t “all know” - that’s incredibly lazy thinking. You have to back up your assertions with some logic.
“Searle points out that this is impossible; information processes can’t cause consciousness because they’re not brute facts.”
What does this mean? A lot of philosophers don’t understand computers, but computers are brute facts - you can touch them - and also information processes, because they use the physical laws of the universe to perform computations.
“Because one property of computer programs is that they can run on any sort of hardware. The same program can run on your Mac or a PC or a series of gears and pulleys. Which means it must be the program that’s important; the hardware can’t be relevant. Which is patently absurd.”
Why is it patently absurd? You need to get away from these lazy assertions, and actually provide some reasons, and, you know, a logical argument. As you know, all machines which are Turing-equivalent can model any computation (see your computer science education for the proof), and the physical laws of the universe can be used to model a Turing-equivalent machine (see any desktop computer as an evidence-based fact), therefore in a very real way, the hardware is not relevant. It couldn’t really be more clear to me!
FWIW, I view the universe as a computation, and a long time ago - about the age of 13 or so - I came around to the conclusion that any computation in the universe is indistinguishable from free will / consciousness. If you replaced every one of the neurons in your head with a miniature device that acted like a neuron, then there’d be no effective difference in your behaviour, yet now your brain would be formed out of components we could model in a computer - down to the low-level physical laws, if necessary. It seems to me, that following this line of reasoning, all that is required to simulate a conscious being inside a computer (and by analogy the Chinese Room), is a thorough understanding of (1) physics and (2) the physical structure of neurons and their interconnections.
Would the result be conscious? Well, guess what - I don’t know that you’re conscious. Nobody knows anything about anybody else’s consciousness. We just have to believe what they say when they say they’re conscious, so instead, it comes down to intelligence. And the only criteria we have for deciding intelligence - in spite of your objections, which you haven’t backed up - are the senses we receive, i.e. the input into the brain, which is formed completely from the output of the simulation. I view the Turing test as a thought experiment that demonstrates that we evaluate intelligence only through outward behaviour - it’s the only way we can evaluate anything - and therefore intelligence can be simulated.
posted by Barry Kelly
on March 14, 2007 #
Just one further point, after remembering a few things. Many philosophers theorizing about consciousness still use homunculus-based approaches, e.g. see:
http://en.wikipedia.org/wiki/Homunculus#The_homunculus_argument_or_fallacy_in_the_philosophy_of_mind
Many philosophers have tried to use infinite regression of homunculi as an argument that there’s something special, or magical, about the human mind, without realizing that the physical laws of the universe can themselves form a very simple homunculus, terminating the infinite regression - thus many “disproofs” of theories based on infinite regression are actually incorrect.
The Chinese Room argument is similar: we see inside the box (we’re given special knowledge of how the “box”, aka the brain, works) to see the literal homunculus, and therefore it’s “obvious”, or “patently clear” that it (the whole system) is not conscious.
Problem is, would we make the same claim if we knew exactly what was inside the human skull-box? If not, why not? Is it any different?
To me, the human skull-box and the Chinese Room are perfectly analogous. The homunculus is literal in the Chinese Room, but it’s the physical laws of the universe in a brain or a computer. The instructions and book in the Chinese Room are the physical state and structure of the brain’s neurons. It seems to me that if you support the thesis that the Chinese Room is not conscious, then you believe that the human mind is not conscious.
posted by Barry Kelly
on March 14, 2007 #
Jerry Fodor. The scope of his work is not as extensive as Searle’s, but it’s definitely important (he provides the best arguments for the so-called cognitivist approach) and has contributed very original ideas about the structure of the mind. It is a pleasure to read him - you could read any of his works without any previous knowledge of the matter and still benefit from them.
posted by Tom Berger
on March 14, 2007 #
Seth writes:
Third way: consciousness is a VARIABLE, not a BINARY.
It’s both, like one of those dimmer light switches that clicks when going to zero. Saying that thermostats have some level of consciousness (as Chalmers seems to do) just strikes me as crazy and that’s not a trick of language.
I would say a lot of trouble is hidden in the words “given instructions on how to convert one set of Chinese symbols to another”. It may be that this turns out to mean something like “a conscious mind, or access to one in some manner” in practice.
Huh? We’re assuming the theoretical possibility that building a computer program to solve the Turing Test is possible. Then we’re writing the program down in human language as instructions in a book. The book is not a conscious mind. This may turn out not to actually be possible in reality, but that’s why it’s a thought experiment. This is also not an artifact of language.
talboito: Yes, Searle does come off as a bit arrogant even in writing. But I’ve learned to tolerate some level of arrogance.
Barry caught a typo which I’ve now fixed (I had “effects” instead of “behavior”). His other arguments don’t seem particularly convincing to me — he falls for exactly the trap I describe: he confuses epistemology with ontology.
I should clarify that this piece is not meant to be a rigorous philosophical argument, just a reflection on recent events. The reason I’ve been talking to people about this stuff is that I’ve been working on the more rigorous piece.
Barry: The difference between the Chinese room and the brain is that brains cause consciousness.
posted by Aaron Swartz
on March 14, 2007 #
I think you are unfair to poor old Mr. Dennett. His usual argument against this kind of thing holds up well to the Blockhead case, for example:
When you say “Obviously such a list would be unreasonably long in practice, perhaps even when heavily compressed, but let us play along theoretically.” you are asking us to play along with something unrealistic - and that’s the problem. How long is “unreasonably long”? Well, think how big a list of all 30-minute conversations is. If each word could be followed by one of 10 others, and speech happens at 1 word per second, then that’s 10^(1800) conversations. So it is reasonable to say “No - I will not play along, even theoretically”.
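A quick sanity check of that figure, under the same assumptions (10 possible next words, one word per second, thirty minutes) - a sketch only, the numbers are the ones stated above:

    # Rough count of possible thirty-minute conversations.
    words_per_conversation = 30 * 60   # 1800 words at one word per second
    choices_per_word = 10
    conversations = choices_per_word ** words_per_conversation
    print(len(str(conversations)) - 1)  # prints 1800, i.e. about 10^1800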
posted by tom s.
on March 14, 2007 #
I meant to finish with “end of conversation - intelligent or otherwise”
posted by tom s.
on March 14, 2007 #
What is it about functionalism that allows it to resist thought experiments? Functionalists certainly like to have their zombies.
posted by Aaron Swartz
on March 14, 2007 #
What is your definition of consciousness?
You are arguing what it means to be conscious but haven’t defined what consciousness is.
posted by Sean Abrahams
on March 14, 2007 #
Which means it must be the program that’s important; the hardware can’t be relevant. Which is patently absurd.
Why is it patently absurd?
… things in the world exist irrespective of whether we know them or not.
At the macroscopic scale, yes. Once you get into quantum mechanics all bets go out the window (cats in boxes and such).
The difference between the Chinese room and the brain is that brains cause consciousness.
Are we sure that the brain isn’t just a Chinese box? Do we understand enough about it to say this?
posted by David Magda
on March 14, 2007 #
Boy howdy, do I wish I could talk to you about this in person. This question has been irritating me since I took my first cog sci course, and I’ve never spoken about it to anyone who’d even heard of this functional argument against the Chinese room. It might have been addressed in that very class, I just wasn’t much of a student.
posted by Ben Donley
on March 14, 2007 #
We know dogs are conscious
bowwow?
posted by Really?
on March 14, 2007 #
I don’t usually weigh in, because I think you are only playing within the ‘American tradition’ context. But the word ‘consciousness’ - and epistemology and ontology - these aren’t used very well in this piece.
Let’s say Babbage probably didn’t think of the Turing test (he didn’t - wicked enough!). But maybe kids given teddy bears to play with in their bedrooms (19th century?) gave souls, and everything else they could give, to their teddy bears.
Then Turing’s test came (and yes, the Strong AI school a bit later, with its very particular kind of information-theory thing). And then ripples in the pond.
But then you get Johns Hopkins’s Ben Carson’s argument - or rather his suggestion on the topic - about why we let other humans waste what their brains potentially have.
Say we all have good enough hardware, but we don’t want to install software, and we tolerate hardware being wasted on bad, poorly written software. (Go to the Tenderloin or Hunters Point, or Richmond in the East Bay, or East Oakland, say. You see potential Mac Pro towers or supercomputers playing - or stuck with - 30-year-old Atari games, or even something less than that.)
He made that argument in some places - in talks to the public - and it seems it didn’t fly.
So we are going back to teddy bears again. Such a cool argument. I can’t wait to see your real piece.
posted by IT kids are concerned about CPUs huh?
on March 14, 2007 #
It’s an old book so there may now be better ones, but ‘The Mind’s I’ goes through all of this (as does ‘Permutation City’, mind-blowingly, by Greg Egan). I don’t claim that the following is Hofstadter’s answer though.
I think it’s possible that a human could animate a stop-motion consciousness, with symbols etc. And that, I wouldn’t have a problem with accepting as conscious.
What you’ve put as Searle’s big comeback about causality I don’t understand. I certainly don’t think that processing information makes machines conscious. I don’t know what you or Searle are arguing, so maybe I’m on your side and don’t realize it but, I distrust any argument that takes concepts that are pre-experience or understanding of what consciousness is, and tries to run with them. Minsky sums it up for me:
“It is too easy to say things like, “Computers can’t do (xxx), because they have no feelings, or thoughts”. But here’s a way to turn such sayings into foolishness. Change them to read like this. “Computer can’t do (xxx), because all they can do is execute incredibly intricate processes, perhaps millions at a time”. Now, such objections seem less convincing — yet all we did was face one simple, complicated fact: we really don’t yet know what the limits of computers are. Now let’s face the other simple fact: OUR NOTIONS OF THE human MIND ARE JUST AS PRIMITIVE.” (my emphasis)
posted by Mind's I Reader
on March 14, 2007 #
I haven’t read the above comments. I suppose I should. What if Blockhead had no record of conversations, only a set of rules hardwired in, and a large but lossy memory, and a clock that wasn’t quite perfect. He, let’s call Blockhead he, I think there’s a Peanuts character named Blockhead, or at least one of them calls another Blockhead. Anyway…
Blockhead, over time, receives audio input and one of his hard-wired rules is to drive a speaker at varying frequencies. Another rule is to keep track of how many times a particular driven pattern through the speaker is followed by another pattern. And record all of this, again, in a lossy sort of way. Losing some random fraction of what’s been collected, at a rate that falls off with time. That is, there’s always a lot coming in, so Blockhead is always shedding a lot of recent data, but there is a pile of old stuff that slowly builds up. And even some of that is lost, but the rate functions are balanced so there’s a slow accumulation. Another rule is to pattern-match. Pattern-match all sorts of things, and assign those patterns over time, the ones that happen so frequently they aren’t lost, assign random names to those patterns, and put those random names at specific addresses. And even those addresses will be lost if they aren’t reinforced. But another rule Blockhead follows is to repeat driving patterns on the speaker if a pattern of response is detected. Add in a rule to favor own survival over survival of another.
Over, and over, and over. All the time. He’ll develop grammar, syntax, style. A sense of time. A sense of rules for conversation. Confabulating many possible responses, he will have access to rules, tried over time, that allow him to choose some confabulations that are better than others, and drive the speaker with those.
Maybe throw in some additional shortcuts. Add a video feed. Favor visual patterns with a lot of green on the bottom, and blue on top. Favor visual patterns that look like this :) only rotated 90° clockwise. Add in the ability to distinguish chemical smells and similar lossy association patterns can be ruled between smells and :) patterns and green/blue patterns, and certain audio patterns.
Make Blockhead reproducible, with a lossy pattern of the program. Make a pattern such that Blockheads can’t tolerate another Blockhead whose responses vary too far from the patterns that he recognized in the first few years of life. Make Blockhead want to destroy Blockheads that don’t match the pattern. You’ll have Blockheads that more and more efficiently learn to recognize patterns, and you’ll have clusters of Blockheads. Predictably, Blockheads near the borders of their clusters will at once destroy each other more often, and end up with new patterns and an urge to integrate the groups. I’ll bet they even naturally select code patterns that the original designer never predicted, more closely aligned to survival in their environment.
Don’t give me the usual crap about random numbers not being truly random. Schrodinger’s equations assure us that those tiny little atoms that “cause” solids (please, “causation” hasn’t a thing to do with it. Have you considered that it just is? Maybe the math just fell out that way in this particular instance of a universe?) will eventually induce measurable randomness into the clock. Even NIST’s clocks aren’t perfect. Not perfect.
How much. odds. These things are fundamental.
Random chance and self-repeating patterns condensing out of a huge chaos over vast amounts of time. Causation at the quantum level and causation at the level of human affairs are two things that follow different rules. Not because there’s a definitional difference in causation, but because there are so many how many, how much questions of probability in between.
Of course the biological Blockhead is self-aware by every possible test and entirely able to transcribe meaningless symbols. Pattern-matching and lossy memory and a fuzzy clock over time lead to this mental model.
The academies of philosophers and economists suffer from the same failure of mind: we can confabulate any situation there ever could be, but it only gets interesting when we start trying to find out “how much”. Pursuing theory without observation is tantamount to murder. So is observation without doubt.
posted by Niels Olson
on March 15, 2007 #
“Saying that thermostats have some level of consciousness (as Chalmers seems to do) just strikes me as crazy and that’s not a trick of language.”
It’s like saying “an ice cube has some level of heat”. So you react “That’s crazy! An ice cube is cold, not hot!”. Would it sound crazy to say (a similar but not identical idea) “thermostats have some level of reaction to their environment”? It’s obviously talking about a trivially small amount of a quantity we think of, linguistically, in terms of a high level of the quantity. It would help to distinguish between “temperature” (the quantity) and “hot” (high level of the quantity). The trick of language is that “consciousness” is used in both senses (like the word “heat”).
“Huh? We’re assuming the theoretical possibility that building a computer program to solve the Turing Test is possible.”
That’s right. ASSUMING! The point is that the assumption itself is arguably circular or tautological. The artifact of language is that it allows you to hide that circularity or tautology from inspection. It’s like Maxwell’s Demon: “assume we could distinguish between hot and cold (high and low energy) particles, then … We are assuming that … In theory …” The problem here is that it turns out that doing the operation of “distinguish” is dubious, in terms of possibly taking more energy itself. Similarly, the glib phrase of “writing the program down in human language as instructions in a book” may not be possible in a way that doesn’t turn out to be, in practice, “create an artificial intelligence”.
posted by Seth Finkelstein
on March 15, 2007 #
This piece was way below par for you, Aaron. Most of the things that you said were “obviously” the case are not at all obvious. It is like a Christian saying well “obviously” God exists - obvious to a Christian, yes, but not to an atheist. You just made a load of unsubstantiated statements of faith.
For what it’s worth, all of Dennett’s points seem quite reasonable to me, but I wouldn’t go so far as to say that they are “obviously” true, because they clearly aren’t from your perspective. I admit I haven’t read widely on the subject (Penrose and Hofstadter are my other two main jumping off points) but I have had quite a few more years than you to mull it over.
It seems to me that you are simply defining consciousness as something that occurs in brains. If we accept your definition then clearly it can’t occur in computers. That is no better or worse than defining consciousness as whatever property is shared between all potential systems that could pass a Turing test, it is just a different definition. Whatever definition of consciousness we might come up with, clearly it should apply to human beings (both the above definitions do). Beyond that, who can say whether an ant is conscious whereas a hypothetical computer program that passes the Turing test is not? Or the other way round? It entirely depends what you mean by the word.
In fact, some people say that human beings are not conscious in general and that consciousness is only achieved in certain enlightened beings after years of meditation etc. Or that there is only a single Universal consciousness etc.
Answers are meaningless unless you know what the question is and I don’t think we do. It is like asking for the answer to “Life, the Universe and Everything.”
I expect better from you and look forward to a future post on the subject in which you actually have something substantive to say.
posted by Ian Gregory
on March 15, 2007 #
Well this is a fascinating topic that I’ve been thinking a lot about lately. I have not studied it much yet, so I may be ignorant of one thing or another. But I have to say that Aaron’s argument is almost completely unsatisfying. I’m not even sure what his point is. This is a little bit of a ramble; my apologies.
We (conscious humans) look at something and decide to interpret it as an information process; but such processes don’t exist in the world and thus can’t have causal powers.
I honestly don’t even know what that means. It’s hardly the knock-down argument I was waiting for. Reminds me of the kind of thing Thomas Aquinas might say — my apologies for such a shoddy kind of reply; it’s just I don’t understand the statement well enough to make a rational reply to it.
We know that consciousness, whatever it is, exists, because we experience it. But we don’t yet know what causes it. We’re ruling out appeals to phenomena that don’t come down to physics.
Saying that humans do not consciously understand English is false more or less by definition — what else can we mean by consciousness?
There are only two ways out of it: you can either claim that no one is conscious or that everything is conscious
How does this follow? I can’t say that certain kinds of arrangements of atoms, whose defining characteristics are as yet unknown, are conscious, and others aren’t?
Obviously consciousness is created by the brain (at least, that’s what the behavior of other apparently conscious systems would indicate, when their brains are destroyed). The only conceivable explanation, to me, is that consciousness is created by some process of computation, because there is nothing else there, that I know of, in our brains.
I have to agree with the guy who quoted Minsky. Something in the way our brain’s atoms are configured creates our experience, which we can only guess is something different from the experience of an insect or an ordinary computer program. If a chess playing program has state that registers that it is losing, is that the same as experiencing pain for a simple animal? For a human? If not, what is the difference? State that registers the state?
Basically I don’t see the relation between these questions, which seem the pertinent ones to me, and what Aaron is talking about.
Thus, no computer can ever consciously understand Chinese, because no computer does more than what you’re doing — it’s simply following a set of instructions.
The hardware is just following instructions. How do we know that computer hardware, plus a large pile of state, plus some transition rules, is not conscious? Isn’t that pretty much what our brains are, as far as we know?
consciousness is a natural phenomenon which … is caused by the actions of the brain in the same way solidity is caused by the actions of atoms
How is this different from being caused by computation?
Here’s awaiting your real post — in the mean time, unsubstantiated rambles are what you get back, from me at least :)
posted by David McCabe
on March 15, 2007 #
Searle’s Chinese Room experiment is a trivial misdirection. He focuses on the man in the room matching symbols rather than the creator of the semantic and syntactic translation rules. That designer was conscious. The man in the room is working unconsciously. When I speak my mouth and vocal cords do the translation from nerve impulses to sound patterns but it is entirely unconscious. You have to follow the trail back into the brain where you get lost because consciousness is an emergent property of the neural networks, not a property of the machinery at all.
posted by James Vornov
on March 15, 2007 #
What’s fascinating about the Chinese Room is that it’s a great litmus test. People on both sides of the issue are absolutely clear on their position and think the people on the other side are being deliberately obtuse.
My own take on it: I admit I’m deeply confused by why the information processes in my head would lead to my real subjective experience, but since I doubt that neurons have any essential properties that transistors lack, I have to conclude that software, in neuron or transistor form, can somehow lead to consciousness. If there’s some other process beyond that, adding qualia on top of the information processing, then I think that would have to be a property of the universe itself, not neurons as such.
What I’ve always wanted to ask Searle is this: what exactly is it that you think neurons are doing, physically, that transistors can’t do? If I invented transistors that added that capability, could I build a consciousness out of them?
posted by Chris
on March 15, 2007 #
Seth writes:
It’s like saying “an ice cube has some level of heat”. So you react “That’s crazy! An ice cube is cold, not hot!”. Would it sound crazy to say (a similar but not identical idea) “thermostats have some level of reaction to their environment”?
Consciousness isn’t defined as level of reaction to the environment. Consciousness is defined as subjective first-person experience. I don’t think thermostats have subjective first-person experience. I do not think there is anything that it is like to be a thermostat.
“Huh? We’re assuming the theoretical possibility that building a computer program to solve the Turing Test is possible.”
That’s right. ASSUMING! The point is that assumption itself is arguably circular or tautological.
I don’t see the circularity. The argument is:
- Assume you built a program that passed the Turing Test.
- Then you could write it down in a book
- Then you could ask a man to follow it.
- Then the man could act as if he spoke Chinese.
- But we know he doesn’t.
- So the Turing Test is not a valid test of consciousness.
posted by Aaron Swartz
on March 15, 2007 #
You got more comments from this one than any previous!!
Try one of Bucky’s faves: Korzybski. Science and Sanity was online, dunno if still there.
Love.
posted by William Loughborough
on March 15, 2007 #
“Consciousness isn’t defined as level of reaction to the environment. Consciousness is defined as subjective first-person experience.”
Then that’s using the “large amount” sense of the word. In that sense of the word - again, IN THAT SENSE OF THE WORD - I don’t think anyone would be claiming thermostats have consciousness. They might say that using a different sense of the word, one akin (though not identical) to “reaction to the environment”.
I don’t see the circularity. The argument is:
- Assume you built a program that passed the Turing Test.
- Then you could write it down in a book
#2 does not necessarily follow from #1, for very deep reasons. A program which could pass the Turing Test might be too complex to “write it down in a book” in any practical sense. You can’t even “write down in a book”, in practice, all the software that runs a modern computer operating system - that is, the source code is just too huge. Sure, you can start playing games, and say it’s a big, big, book with tiny, tiny, fonts, but then that is just saying it’s information.
- Then you could ask a man to follow it.
C’mon - you’ve debugged programs. You KNOW how complicated this is in practice.
What does “follow it” mean? It just may mean “Create a process of such complexity that it’s a consciousnesses”.
- Then the man could act as if he spoke Chinese.
And this is where we start getting circular. All 1-3 basically say is that if consciousness is an information process, then following an information process acts like consciousness. That’s circular. What 1-3 tries to do is say information processes are trivial things, by using trivializing language by assumption - i.e. “write it down in a book” IMPLIES, emotionally, “trivial”. That’s where the linguistic artifacts come in.
- But we know he doesn’t.
He doesn’t. The program does. Another way of phrasing 1-4 is “Assume we could build an artificial intelligence which passed the Turing Test in Chinese. Then a man could run that AI program and ask it something in Chinese, and it would answer in Chinese.”. Which is rather tautological.
- So the Turing Test is not a valid test of consciousness.
Only because you hid an assumption of that in the first place, by implicitly describing the AI above as not conscious.
posted by Seth Finkelstein
on March 15, 2007 #
Or in a more straightforward manner than Seth.
The Chinese Room assumes that the process of translating a language is an enumerable process, and hence describable in software on a von Neumann machine. Of course it isn’t, so it can’t be.
The reality of both human language comprehension and translation of such is far more complicated, and consciousness, which is a requirement for doing either or both activities in a recognizably human fashion, will therefore not be expressible in a von Neumann machine architecture.
Consciousness is not a counting problem, and can not be reduced to a counting problem. It is some other beast.
You can take an aspect of the “thinking process” and reduce it to a counting problem, but you can not combine some number, or even an infinite number, of counting problems to create human recognizable thinking.
posted by smacfarl
on March 15, 2007 #
smacfarl:
Consciousness is not a counting problem, and can not be reduced to a counting problem.
You can use a ‘counting’, von Neumann machine to represent floating point numbers to any precision you like, and use those to simulate a brain complete with juices in the grey matter and the firing of neurons (go down to the molecules if you like). Of course no actual ‘man in a room’ could give you an answer in his lifetime but it’s just a thought experiment. If this system represents a Chinese speaker’s brain and it’s a faithful reproduction I see no reason why it wouldn’t count as a consciousness. Don’t be misled by the fact that its ‘life’ would run very slowly; it won’t notice.
posted by Mind's I Reader
on March 15, 2007 #
As far as I can tell, Aaron is saying that consciousness is a physical thing that happens in brains. Our experience of consciousness is a physical characteristic of what working brains do. Something like that. In which case, a computer program might be able to perfectly model it, but it would not actually be the thing itself. A computer could perfectly model a rock, and tell you everything about the rock’s behavior, but that internal modelling would not be a rock.
Of course, it is possible that consciousness is a physical thing.
IMHO, however, whether it is true is not shown by the example of the Chinese room. It is also possible that the invisible dragon in my garage is a physical thing. Since we don’t as yet have any way of detecting the difference between this type of consciousness and a simulated consciousness, it’s a lot like saying that consciousness is an invisible, undetectable dragon in your head.
So, hopefully, Aaron can supply us with some reason that consciousness is a physical thing. Sorry if I’m acting obtuse.
Also, the bulk of Searle’s description of consciousness as a “natural phenomenon which developed through evolutionary processes and is caused by the actions of the brain” does not conflict at all with the functionalist’s Information Process idea of consciousness. The only part that conflicts is the idea that it is a physical thing.
Prove that, and you won’t have to deal with this argument anymore.
posted by Ben Donley
on March 15, 2007 #
Aaron,
the neuroscience researchers, neurologists and psychiatrists define consciousness as level of response to the environment. Subjective self-awareness is a definition that enables circular arguments about consciousness. It’s navel-gazing. If you want some mind-breaking logic, go do quantum physics or take a higher math class taught in the method of R. L. Moore. I’m sure somebody at Stanford or MIT is teaching one.
The Summer of Code thing sounds fun and helpful,
Niels
posted by Niels
on March 16, 2007 #
The blockhead argument seems very misleading to me. Some reasons:
You can’t build it in our universe, or in any universe remotely like ours. (Even if the universe was big enough to contain the information required for blockhead, it wouldn’t be able to access it quickly enough to carry on the conversation without transmitting information faster than the speed of light.) This isn’t a pedantic quibble - thought experiments are only useful when their premise can’t be ruled out entirely. You’re claiming that the turing test is bad because we can build this obviously unconscious system that could pass it - but we can’t build it - not even close.
Even if it could exist, you immediately run into a problem with your definition of “all possible” conversations. Which conversations are possible and which aren’t? Are you restricting it to conversations that would not be implausible to have with a mentally sound person? Or does it include all conversations that are grammatical, even if they are completely non-linear? If you choose the latter, it probably wouldn’t even seem conscious; if you choose the former, it is hard to see how you could build something like blockhead without creating an AI to build blockhead. And that means that blockhead is just an expansion of all the possible ways that the state of that AI could evolve over the course of 30 minutes. And here is where the sleight-of-hand comes in: You know that people won’t intuitively agree that the AI can’t be conscious, since it is after all an AI. So you try to trick them into calling an extremely inefficient representation of that AI unconscious, because it’s easy to say “a big list isn’t conscious” as long as you don’t stop and think about how big “big” is here.
Ignoring the above point, a list of all possible conversations is not sufficient. Conversations involve choices made by both parties - for example if I ask blockhead what its favorite color is, it can say “red” or “blue”. What part of blockhead makes that choice?
posted by
on March 16, 2007 #
I personally recommend Richard Rorty, who might show you that many of the words you have tried to throw around (e.g. ontology) are philosophical debris.
The whole consciousness debate is ridiculous. No reasonable definition is being presented. You seem to take pleasure in using it in awkward contexts to show how great your (ill-defined) argument is. This technique of argument is frequently known as “question begging”.
As far as thinking about this problem in general though, I would recommend you abandon being a partisan in a tired debate and look at some current research in neuroscience. fMRI has given us insight into the brain that simply wasn’t possible when Dennett and Searle got started on this debate. Here’s one public exchange these two had twelve years ago:
http://www.nybooks.com/articles/1680
I really recommend trying some of Peter Dayan’s papers on for size. Not the easiest stuff to get started on, but you get discussion of the brain based on solid research, not a Chinese Room false dichotomy.
http://www.gatsby.ucl.ac.uk/~dayan/papers/index.html
I’m not Dennett’s biggest fan, but he’s a clever guy who can be a pleasure to read. You end your post looking like a glib Searle fanboy by charging that he is “insane.” The Dennett you are referring to is what many in philosophy would call a “straw man” set up by Searle.
posted by Jeremy Corbett
on March 16, 2007 #
Blockhead is interesting. Assume that the inquisitor gets the first move and makes a statement. Assume I am the inquisitor and my first statement is X. Presumably Blockhead does a lookup in its “table of initial statements” and gives the corresponding response Y (ignore the fact that even this hypothetical “table of initial statements” may have more entries than the number of protons in the universe). After the end of the conversation another inquisitor comes along and starts a conversation with statement X. Blockhead of course does not know whether this inquisitor is me again or someone else. Say it is me and I again start with statement X - blockhead will respond again with Y, whereas a conscious being might well come up with a different response, eg “Hey, that’s weird, someone just asked me that”.
If Blockhead is only judged on one single conversation it can be deterministic and will always respond in exactly the same way. A conscious being would not do that. This perhaps indicates a weakness of the Turing test.
Imagine that Blockhead is going to be judged on multiple conversations; then I would say it could not pass itself off as conscious unless it keeps state from previous conversations. Imagine X is “How many conversations have you had?” If Y is “Ten” then I could respond by saying “But you said that last time I spoke to you, so it must be at least 11”. So Blockhead would have to respond with something like “What, you believe everything I say?” or “My memory is terrible”. So Blockhead’s lookup tables must be designed to be consistent with a being that keeps no state information between conversations. I would hesitate to refer to any such hypothetical being as conscious.
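To make the statelessness concrete, here is a tiny sketch (the lookup table and its contents are purely illustrative):

    # A pure lookup table has no memory between conversations, so the
    # same question always gets the same scripted answer.
    TABLE = {"How many conversations have you had?": "Ten"}

    def blockhead(statement):
        return TABLE[statement]

    first = blockhead("How many conversations have you had?")
    second = blockhead("How many conversations have you had?")
    assert first == second == "Ten"   # no trace of the earlier exchange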
posted by Ian Gregory
on March 16, 2007 #
The trouble with the Chinese Room is that it postulates that the room is possible to construct and then draws conclusions from that assumption. It’s as if I were to say “Suppose I presented you with a pig that could speak Hungarian, would you say that the pig was conscious or not?” My first reaction would be to say “show me the pig”. Somehow, people who would consider the idea of a Hungarian-speaking pig to be ludicrous are happy to accept the idea of a “Chinese-speaking room”.
Let’s go with that assumption for a moment, though. Where did the data from the room come from? Well, of course it came from someone who could speak Chinese. Somehow we “compiled” details of all the knowledge of Chinese of that person into an instruction list for the room, in an analogy with a computer program.
Let’s follow that analogy a bit. Instead of a “compiled” model, consider instead an “interpreted” model. In computer language terms these things are equivalent. Usually the only difference comes down to performance. Instead of pre-compiling the knowledge of the Chinese speaker, just put the speaker into the room and let him interpret the incoming messages and produce the output. Is there any doubt now where the consciousness of “the room” lies?
posted by Doug Clinton
on March 16, 2007 #
Let’s look at Blockhead, now. The definition says “It simply contains a list of all possible thirty minute conversations.”
What process could be used to compile such a list? I would assert that there are only two ways that such a list could be generated. The first is to have a person or a number of persons sit around and think up all the different conversations and record them. The second would be to have a computer generate all 30-minute combinations of words and then have one or more people filter out the ones which are meaningless.
Even if we assume that the set of conversations could practically be generated (i.e. not take longer than, say, the heat-death of the universe to compile), it is perfectly clear that the only way to generate the list for Blockhead to work on is to pass it through the mind of one or more conscious entities.
Neither this post nor my earlier one on the Chinese room actually says anything about what consciousness is or might be. They simply point out that the postulates of both the Chinese Room and Blockhead are invalid, so any questions or conclusions that might arise from those problems are invalid.
posted by Doug Clinton
on March 16, 2007 #
We don’t consider the cellphone to be conscious because we know that what appears to be one person talking to a box is actually two people having a conversation in real time through a box.
The instructions in the Chinese room are an asynchronous form of communication. The outsider is having a conversation, asynchronously, with whoever wrote those instructions. The person who wrote the instructions presumably understands Chinese.
The same is true of computer programs. The user is having a delayed, anticipated conversation with the programmer.
You can play all sorts of tricks to make devices seem alive (Tickle-Me-Elmo), but if you consider these tricks to simply be asynchronous communication all the deep philosophical moments go away.
posted by Patrick May
on March 20, 2007 #
“the postulates of both the Chinese Room and Blockhead are invalid”
Exactly. If you can assume anything then you can prove anything.
posted by David S.
on March 28, 2007 #
So, do you believe that the Church-Turing thesis is false, or that consciousness (whatever that may be) is not a computable function? Or some alternative that I have failed to list - as far as I can see those are the only possible conclusions to accepting the Chinese Room argument.
For the purposes of this question let’s assume the quantum version of the Church-Turing thesis, though I’m doubtful that quantum effects play a role in the brain (always felt rather dubious about Penrose’s arguments about that).
posted by Jack
on April 6, 2007 #
Why do you need to bring aliens into the picture? For the subjective experience you refer to as “seeing red”, other humans can’t experience it either. Imagine aliens that have the technology to take human eyes and incorporate them into their own anatomies. Does that give them the ability to have the same subjective experience as humans?
posted by EKoL
on August 4, 2007 #
You can also send comments by email.
Comments
What about the Norwegian philosopher Arne Næss? He also writes for a popular audience and has coined the ideas such as “deep ecology” and “ecological wisdom”…
posted by Albert Francis on March 14, 2007 #
“There are only two ways out of it: you can either claim that no one is conscious or that everything is conscious”
Third way: consciousness is a VARIABLE, not a BINARY.
That is, “everything is conscious” is like “everything has a temparature” - there’s a range from 0 to a very high level. We’re predisposed to read “everything is conscious” like “everything is hot”, which sounds absurd, but that’s just because of language usage.
I would say a lot of trouble is hidden in the words “given instructions on how to convert one set of Chinese symbols to another”. It may be that this turns out to mean something like “a conscious mind, or access to one in some manner” in practice. That is, it’s not really an argument, but a tautology, basically another artifact of non-rigorous language (i.e, circularly arguing that it’s a mere set of instructions, and minds are taken to be much more glorious than a set of instructions, so if a set of instructions can do what a mind does, a mind must be something else).
posted by Seth Finkelstein on March 14, 2007 #
I think part of the problem of liking Searle the philosopher is dealing with the arrogant, blowhard you will encounter if you ever have the opportunity to take one of his classes.
posted by talboito on March 14, 2007 #
Have you read anything by David Chalmers?
posted by Mike Bruce on March 14, 2007 #
One of my own favorite contemporary philosophers, Susan Haack, certainly meets your three criteria of importance, reasonableness, and writing clearly for a popular audience:
http://www.as.miami.edu/phi/haack/SummaryBio.htm
posted by Kermit Snelson on March 14, 2007 #
“The point is that we don’t assign consciousness purely based on effects. Blockhead acts like it’s conscious and a completely paralyzed person acts like they’re not, yet we all know that the first isn’t conscious and the second is.”
Actually, we do assign consciousness purely based on effects. The only things we perceive of the world are through our senses, which are effects in the physical world. There’s no other way for us to assign consciousness.
“Yet we all know” - what kind of argument is that? No, we don’t “all know” - that’s incredibly lazy thinking. You have to back up your assertions with some logic.
“Searle points out that this is impossible; information processes can’t cause consciousness because they’re not brute facts.”
What does this mean? A lot of philosophers don’t understand computers, but computers are brute facts - you can touch them - and also information processes, because they use the physical laws of the universe to perform computations.
“Because one property of computer programs is that they can run on any sort of hardware. The same program can run on your Mac or a PC or a series of gears and pulleys. Which means it must be the program that’s important; the hardware can’t be relevant. Which is patently absurd.”
Why is it patently absurd? You need to get away from these lazy assertions, and actually provide some reasons, and, you know, a logical argument. As you know, all machines which are Turing-equivalent can model any computation (see your computer science education for the proof), and the physical laws of the universe can be used to model a Turing-equivalent machine (see any desktop computer as an evidence-based fact), therefore in a very real way, the hardware is not relevant. It couldn’t really be more clear to me!
FWIW, I view the universe as a computation, and a long time ago - about the age of 13 or so - I came around to the conclusion that any computation in the universe is indistinguishable from free will / consciousness. If you replaced every one of the neurons in your head with a miniature device that acted like a neuron, then there’d be no effective difference in your behaviour, yet now your brain would be formed out of components we could model in a computer - down to the low-level physical laws, if necessary. It seems to me, that following this line of reasoning, all that is required to simulate a conscious being inside a computer (and by analogy the Chinese Room), is a thorough understanding of (1) physics and (2) the physical structure of neurons and their interconnections.
Would the result be conscious? Well, guess what - I don’t know that you’re conscious. Nobody knows anything about anybody else’s consciousness. We just have to believe what they say when they say they’re conscious, so instead, it comes down to intelligence. And the only criteria we have for deciding intelligence - in spite of your objections, which you haven’t backed up - are the senses we receive, i.e. the input into the brain, which is formed completely from the output of the simulation. I view the Turing test as a thought experiment that demonstrates that we evaluate intelligence only through outward behaviour - it’s the only way we can evaluate anything - and therefore intelligence can be simulated.
posted by Barry Kelly on March 14, 2007 #
Just one further point, after remembering a few things. Many philosophers theorizing about consciousness still still use homunculus-based approaches, e.g. see:
http://en.wikipedia.org/wiki/Homunculus#The_homunculus_argument_or_fallacy_in_the_philosophy_of_mind
Many philosophers have tried to use infinite regression of homunculi as an argument that there’s something special, or magical, about the human mind, without realizing that the physical laws of the universe can themselves form a very simple homunculus, terminating the infinite regression - thus many “disproofs” of theories based on infinite regression are actually incorrect.
The Chinese Room argument is similar: we see inside the box (we’re given special knowledge of how the “box”, aka the brain, works) to see the literal homunculus, and therefore it’s “obvious”, or “patently clear” that it (the whole system) is not conscious.
Problem is, would we make the same claim if we knew exactly what was inside the human skull-box? If not, why not? Is it any different?
To me, the human skull-box and the Chinese Room are perfectly analogous. The homunculus is literal in the Chinese Room, but it’s the physical laws of the universe in a brain or a computer. The instructions and book in the Chinese Room are the physical state and structure of the brain’s neurons. It seems to me that if you support the thesis that the Chinese Room is not conscious, then you believe that the human mind is not conscious.
posted by Barry Kelly on March 14, 2007 #
Jerry Fodor. The scope of his work is not as extensive as Searle’s, but it’s definitely important (he provides the best arguments for the so-called cognitivist approach) and has contributed very original ideas about the structure of the mind. It is a pleasure to read him - you could read any of his works without any previous knowledge of the matter and still benefit from them.
posted by Tom Berger on March 14, 2007 #
Seth writes:
It’s both, like one of those dimmer light switches that clicks when going to zero. Saying that thermostats have some level of consciousness (as Chalmers seems to do) just strikes me as crazy and that’s not a trick of language.
Huh? We’re assuming the theoretical possibility of building a computer program to solve the Turing Test. Then we’re writing the program down in human language as instructions in a book. The book is not a conscious mind. This may turn out not to actually be possible in reality, but that’s why it’s a thought experiment. This is also not an artifact of language.
talboito: Yes, Searle does come off as a bit arrogant even in writing. But I’ve learned to tolerate some level of arrogance.
Barry caught a typo which I’ve now fixed (I had “effects” instead of “behavior”). His other arguments don’t seem particularly convincing to me — he falls for exactly the trap I describe: he confuses epistemology with ontology.
I should clarify that this piece is not meant to be a rigorous philosophical argument, just a reflection on recent events. The reason I’ve been talking to people about this stuff is that I’ve been working on the more rigorous piece.
Barry: The difference between the Chinese room and the brain is that brains cause consciousness.
posted by Aaron Swartz on March 14, 2007 #
I think you are unfair to poor old Mr. Dennett. His usual argument against this kind of thing holds up well to the Blockhead case, for example:
When you say “Obviously such a list would be unreasonably long in practice, perhaps even when heavily compressed, but let us play along theoretically.” you are asking us to play along with something unrealistic - and that’s the problem. How long is “unreasonably long”? Well, think how big a list of all 30-minute conversations is. If each word could be followed by one of 10 others, and speech happens at 1 word per second, then that’s 10^1800 conversations. So it is reasonable to say “No - I will not play along, even theoretically”.
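(If you want to check that number yourself, here is the same toy arithmetic spelled out; the one-word-per-second, ten-choices-per-word figures are just the illustrative assumptions above, not facts about real speech.)

    # Toy check of the estimate above: one word per second for 30 minutes,
    # ten possible choices per word. All figures are illustrative assumptions.
    words_per_conversation = 30 * 60                    # 1,800 words in half an hour
    choices_per_word = 10
    conversations = choices_per_word ** words_per_conversation

    print(len(str(conversations)) - 1)                  # -> 1800, i.e. 10^1800 conversations
    # For scale: a common rough figure for atoms in the observable universe is ~10^80.
    print(conversations > (10 ** 80) ** 20)             # -> True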
posted by tom s. on March 14, 2007 #
I meant to finish with “end of conversation - intelligent or otherwise”
posted by tom s. on March 14, 2007 #
What is it about functionalism that allows it to resist thought experiments? Functionalists certainly like to have their zombies.
posted by Aaron Swartz on March 14, 2007 #
What is your definition of consciousness?
You are arguing what it means to be conscious but haven’t defined what consciousness is.
posted by Sean Abrahams on March 14, 2007 #
Why is it patently absurd?
At the macroscopic scale, yes. Once you get into quantum mechanics all bets go out the window (cats in boxes and such).
Are we sure that the brain isn’t just a Chinese box? Do we understand enough about it to say this?
posted by David Magda on March 14, 2007 #
Boy howdy, do I wish I could talk to you about this in person. This question has been irritating me since I took my first cog sci course, and I’ve never spoken about it to anyone who’d even heard of this functional argument against the Chinese room. It might have been addressed in that very class, I just wasn’t much of a student.
posted by Ben Donley on March 14, 2007 #
bowwow?
posted by Really? on March 14, 2007 #
I don’t go in much because I think you are only playing within the ‘American tradition’ context. But the words ‘consciousness’ - and ‘epistemology’ or ‘ontology’ - aren’t used very well in this piece.
Let’s say Babbage probably didn’t think of the Turing test (he didn’t - wicked enough!). But maybe kids in the bedroom given teddy bears to play with (19th century?) gave souls, and everything else they could give, to their teddy bears.
Then Turing’s test came (and yeah, the Strong AI school, a bit later, with some very particular kind of information theory thing). And then ripples in the pond.
But then you’ve got Johns Hopkins’s Ben Carson’s one argument - or suggestion about the topic of the argument: why we let other humans waste what their brains have potentially got.
Say we all have good enough hardware, but we don’t want to install software, and we tolerate hardware being wasted with bad, poorly written software. (Go to the Tenderloin or Hunters Point or Richmond in the East Bay or East Oakland, say. You see potential Mac Tower Pros or supercomputers playing - or stuck with - 30-year-old Atari games, or even something less than that.)
He made that argument in some places - in talks to the public - and it seems it didn’t fly.
So we are going back to teddy bears again. Such a cool argument. I can’t wait to see your real piece.
posted by IT kids are concerned about CPUs huh? on March 14, 2007 #
It’s an old book so there may now be better, but ‘The Mind’s I’ goes through all of this (as does ‘Permutation City’, mind-blowingly, by Greg Egan). I don’t claim that the following is Hofstadter’s answer though.
I think it’s possible that a human could animate a stop-motion consciousness, with symbols etc. And that, I wouldn’t have a problem with accepting as conscious.
What you’ve put as Searle’s big comeback about causality I don’t understand. I certainly don’t think that processing information makes machines conscious. I don’t know what you or Searle are arguing, so maybe I’m on your side and don’t realize it but, I distrust any argument that takes concepts that are pre-experience or understanding of what consciousness is, and tries to run with them. Minsky sums it up for me:
“It is too easy to say things like, “Computers can’t do (xxx), because they have no feelings, or thoughts”. But here’s a way to turn such sayings into foolishness. Change them to read like this. “Computers can’t do (xxx), because all they can do is execute incredibly intricate processes, perhaps millions at a time”. Now, such objections seem less convincing — yet all we did was face one simple, complicated fact: we really don’t yet know what the limits of computers are. Now let’s face the other simple fact: OUR NOTIONS OF THE human MIND ARE JUST AS PRIMITIVE.” (my emphasis)
posted by Mind's I Reader on March 14, 2007 #
I haven’t read the above comments. I suppose I should. What if Blockhead had no record of conversations, only a set of rules hardwired in, and a large but lossy memory, and a clock that wasn’t quite perfect. He, let’s call Blockhead he, I think there’s a Peanuts character named Blockhead, or at least one of them calls another Blockhead. Anyway…
Blockhead, over time, receives audio input and one of his hard-wired rules is to drive a speaker at varying frequencies. Another rule is to keep track of how many times a particular driven pattern through the speaker is followed by another pattern. And record all of this, again, in a lossy sort of way. Losing some random fraction of what’s been collected, at a rate that falls off with time. That is, there’s always a lot coming in, so Blockhead is always shedding a lot of recent data, but there is a pile of old stuff that slowly builds up. And even some of that is lost, but the rate functions are balanced so there’s a slow accumulation. Another rule is to pattern-match. Pattern-match all sorts of things, and assign those patterns over time, the ones that happen so frequently they aren’t lost, assign random names to those patterns, and put those random names at specific addresses. And even those addresses will be lost if they aren’t reinforced. But another rule Blockhead follows is to repeat driving patterns on the speaker if a pattern of response is detected. Add in a rule to favor own survival over survival of another.
Over, and over, and over. All the time. He’ll develop grammar, syntax, style. A sense of time. A sense of rules for conversation. Confabulating many possible responses, he will have access to rules, tried over time, that allow him to choose some confabulations that are better than others, and drive the speaker with those.
Maybe throw in some additional shortcuts. Add a video feed. Favor visual patterns with a lot of green on the bottom, and blue on top. Favor visual patterns that look like this :) only rotated 90° clockwise. Add in the ability to distinguish chemical smells, and similar lossy association patterns can be formed between smells and :) patterns and green/blue patterns, and certain audio patterns.
Make Blockhead reproducible, with a lossy pattern of the program. Make a pattern such that Blockheads can’t tolerate another Blockhead whose responses vary too far from the patterns that he recognized in the first few years of life. Make Blockhead want to destroy Blockheads that don’t match the pattern. You’ll have Blockheads that more and more efficiently learn to recognize patterns, and you’ll have clusters of Blockheads. Predictably, Blockheads near the borders of their clusters will at once destroy each other more often, and end up with new patterns and an urge to integrate the groups. I’ll bet they even naturally select code patterns that the original designer never predicted, more closely aligned to survival in their environment.
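(If that sounds hand-wavy, here is a toy sketch of the kind of lossy, reinforcement-style loop I mean; every name and constant is invented for illustration, and it is nothing like a model of real neurons.)

    import random
    from collections import defaultdict

    DECAY = 0.95          # fraction of each count kept after every new observation (lossy)
    FORGET_BELOW = 0.1    # associations weaker than this are dropped entirely

    memory = defaultdict(float)   # (heard_pattern, reply_pattern) -> reinforced count

    def observe(heard, reply):
        """Record that `reply` followed `heard`, then let every stored association decay."""
        memory[(heard, reply)] += 1.0
        for key in list(memory):
            memory[key] *= DECAY
            if memory[key] < FORGET_BELOW:
                del memory[key]            # the lossy part: weak associations are lost

    def respond(heard):
        """Drive the speaker with the most reinforced reply, or babble if nothing survives."""
        candidates = {r: c for (h, r), c in memory.items() if h == heard}
        if candidates:
            return max(candidates, key=candidates.get)
        return random.choice(["ba", "da", "ma"])

    for _ in range(50):
        observe("hello", "hi")            # a pattern that recurs often enough to survive
    observe("hello", "go away")           # a one-off that will fade under further decay
    print(respond("hello"))               # -> "hi"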
Don’t give me the usual crap about random numbers not being truly random. Schrodinger’s equations assure us that those tiny little atoms that “cause” solids (please, “causation” hasn’t a thing to do with it. Have you considered that it just is? Maybe the math just fell out that way in this particular instance of a universe?) will eventually induce measurable randomness into the clock. Even NIST’s clocks aren’t perfect. Not perfect.
How much? Odds? These things are fundamental.
Random chance and self-repeating patterns condensing out of a huge chaos over vast amounts of time. Causation at the quantum level and causation at the level of human affairs are two things that follow different rules. Not because there’s a definitional difference in causation, but because there are so many how many, how much questions of probability in between.
Of course the biological Blockhead is self-aware by every possible test and entirely able to transcribe meaningless symbols. Pattern-matching and lossy memory and a fuzzy clock over time lead to this mental model.
The academies of philosophers and economists suffer from the same failure of mind: we can confabulate any situation there ever could be, but it only gets interesting when we start trying to find out “how much”. Pursuing theory without observation is tantamount to murder. So is observation without doubt.
posted by Niels Olson on March 15, 2007 #
“Saying that thermostats have some level of consciousness (as Chalmers seems to do) just strikes me as crazy and that’s not a trick of language.”
It’s like saying “an ice cube has some level of heat”. So you react “That’s crazy! An ice cube is cold, not hot!”. Would it sound crazy to say (a similar but not identical idea) “thermostats have some level of reaction to their environment”? It’s obviously talking about a trivially small amount of a quantity we think of, linguistically, in terms of a high level of the quantity. It would help to distinguish between “temperature” (the quantity) and “hot” (a high level of the quantity). The trick of language is that “consciousness” is used in both senses (like the word “heat”).
“Huh? We’re assuming the theoretical possibility that building a computer program to solve the Turing Test is possible.”
That’s right. ASSUMING! The point is that assumption itself is arguably circular or tautological. The artifact of language is that it allows you to hide that circularity or tautology from inspection. It’s like Maxwell’s Demon: “assume we could distinguish between hot and cold (high and low energy) particles, then … We are assuming that … In theory …” The problem here is that it turns out that the operation of “distinguish” is dubious, in terms of possibly taking more energy itself. Similarly, the glib phrase of “writing the program down in human language as instructions in a book” may not be possible in a way that doesn’t turn out to be, in practice, “create an artificial intelligence”.
posted by Seth Finkelstein on March 15, 2007 #
This piece was way below par for you Aaron. Most of the things that you said were “obviously” the case are not at all obvious. It is like a Christian saying well “obviously” God exists - obvious to a Christian, yes, but not to an atheist. You just made a load of unsubstantiated statements of faith.
For what it’s worth, all of Dennett’s points seem quite reasonable to me, but I wouldn’t go so far as to say that they are “obviously” true, because they clearly aren’t from your perspective. I admit I haven’t read widely on the subject (Penrose and Hofstadter are my other two main jumping off points) but I have had quite a few more years than you to mull it over.
It seems to me that you are simply defining consciousness as something that occurs in brains. If we accept your definition then clearly it can’t occur in computers. That is no better or worse than defining consciousness as whatever property is shared between all potential systems that could pass a Turing test; it is just a different definition. Whatever definition of consciousness we might come up with, clearly it should apply to human beings (both the above definitions do). Beyond that, who can say whether an ant is conscious whereas a hypothetical computer program that passes the Turing test is not? Or the other way round? It entirely depends what you mean by the word.
In fact, some people say that human beings are not conscious in general and that consciousness is only achieved in certain enlightened beings after years of meditation etc. Or that there is only a single Universal consciousness etc.
Answers are meaningless unless you know what the question is and I don’t think we do. It is like asking for the answer to “Life, the Universe and Everything.”
I expect better from you and look forward to a future post on the subject in which you actually have something substantive to say.
posted by Ian Gregory on March 15, 2007 #
Well this is a fascinating topic that I’ve been thinking a lot about lately. I have not studied it much yet, so I may be ignorant of one thing or another. But I have to say that Aaron’s argument is almost completely unsatisfying. I’m not even sure what his point is. This is a little bit of a ramble; my apologies.
I honestly don’t even know what that means. It’s hardly the knock-down argument I was waiting for. Reminds me of the kind of thing Thomas Aquinas might say — my apologies for such a shoddy kind of reply; it’s just I don’t understand the statement well enough to make a rational reply to it.
We know that consciousness, whatever it is, exists, because we experience it. But we don’t yet know what causes it. We’re ruling out appeals to phenomena that don’t come down to physics.
Saying that humans do not consciously understand English is false more or less by definition — what else can we mean by consciousness?
How does this follow? I can’t say that certain kinds of arrangements of atoms, whose defining characteristics are as yet unknown, are conscious, and others aren’t?
Obviously consciousness is created by the brain (at least, that’s what the behavior of other apparently conscious systems would indicate, when their brains are destroyed). The only conceivable explanation, to me, is that consciousness is created by some process of computation, because there is nothing else there, that I know of, in our brains.
I have to agree with the guy who quoted Minsky. Something in the way our brain’s atoms are configured creates our experience, which we can only guess is something different from the experience of an insect or an ordinary computer program. If a chess playing program has state that registers that it is losing, is that the same as experiencing pain for a simple animal? For a human? If not, what is the difference? State that registers the state?
Basically I don’t see the relation between these questions, which seem the pertinent ones to me, and what Aaron is talking about.
The hardware is just following instructions. How do we know that computer hardware, plus a large pile of state, plus some transition rules, is not conscious? Isn’t that pretty much what our brains are, as far as we know?
How is this different from being caused by computation?
Here’s awaiting your real post — in the meantime, unsubstantiated rambles are what you get back, from me at least :)
posted by David McCabe on March 15, 2007 #
Searle’s Chinese Room experiment is a trivial misdirection. He focuses on the man in the room matching symbols rather than the creator of the semantic and syntactic translation rules. That designer was conscious. The man in the room is working unconsciously. When I speak, my mouth and vocal cords do the translation from nerve impulses to sound patterns, but it is entirely unconscious. You have to follow the trail back into the brain, where you get lost, because consciousness is an emergent property of the neural networks, not a property of the machinery at all.
posted by James Vornov on March 15, 2007 #
What’s fascinating about the Chinese Room is that it’s a great litmus test. People on both sides of the issue are absolutely clear on their position and think the people on the other side are being deliberately obtuse.
My own take on it: I admit I’m deeply confused by why the information processes in my head would lead to my real subjective experience, but since I doubt that neurons have any essential properties that transistors lack, I have to conclude that software, in neuron or transistor form, can somehow lead to consciousness. If there’s some other process beyond that, adding qualia on top of the information processing, then I think that would have to be a property of the universe itself, not neurons as such.
What I’ve always wanted to ask Searle is this: what exactly is it that you think neurons are doing, physically, that transistors can’t do? If I invented transistors that added that capability, could I build a consciousness out of them?
posted by Chris on March 15, 2007 #
Seth writes:
Consciousness isn’t defined as level of reaction to the environment. Consciousness is defined as subjective first-person experience. I don’t think thermostats have subjective first-person experience. I do not think there is anything that it is like to be a thermostat.
I don’t see the circularity. The argument is:
posted by Aaron Swartz on March 15, 2007 #
You got more comment from this one than any previous!!
Try one of Bucky’s faves: Korzybski. Science and Sanity was online, dunno if still there.
Love.
posted by William Loughborough on March 15, 2007 #
“Consciousness isn’t defined as level of reaction to the environment. Consciousness is defined as subjective first-person experience.”
Then that’s using the “large amount” sense of the word. In that sense of the word - again, IN THAT SENSE OF THE WORD - I don’t think anyone would be claiming thermostats have consciousness. They might say that using a different sense of the word, one akin (though not identical) to “reaction to the environment”.
I don’t see the circularity. The argument is:
#2 does not necessarily follow from #1, for very deep reasons. A program which could pass the Turing Test might be too complex to “write it down in a book” in any practical sense. You can’t even “write it down in a book”, in practice, for all the software that runs a modern computer operating system - that is, the source code is just too huge. Sure, you can start playing games, and say it’s a big, big, book with tiny, tiny, fonts, but then that is just saying it’s information.
C’mon - you’ve debugged programs. You KNOW how complicated this is in practice. What does “follow it” mean? It just may mean “Create a process of such complexity that it’s a consciousness”.
And this is where we start getting circular. All 1-3 basically say is that if consciousness is an information process, then following an information process acts like consciousness. That’s circular. What 1-3 tries to do is say information processes are trivial things, by using trivializing language by assumption - i.e. “write it down in a book” IMPLIES, emotionally, “trivial”. That’s where the linguistic artifacts come in.
He doesn’t. The program does. Another way of phrasing 1-4 is “Assume we could build an artificial intelligence which passed the Turing Test in Chinese. Then a man could run that AI program and ask it something in Chinese, and it would answer in Chinese.”. Which is rather tautological.
Only because you hid an assumption of that in the first place, by implicitly describing the AI above as not being a consciousness.
posted by Seth Finkelstein on March 15, 2007 #
Or, in a more straightforward manner than Seth:
The Chinese Room assumes that the process of translating a language is an enumerable process, and hence describable in software on a von Neumann machine. Of course it isn’t, so it can’t be.
The reality of both human language comprehension and translation of such is far more complicated, and consciousness, which is a requirement for doing either or both activities in a recognizably human fashion, will therefore not be expressible in a von Neumann machine architecture.
Consciousness is not a counting problem, and can not be reduced to a counting problem. It is some other beast.
You can take an aspect of the “thinking process” and reduce it to a counting problem, but you can not combine some number, or even an infinite number, of counting problems to create human recognizable thinking.
posted by smacfarl on March 15, 2007 #
smacfarl:
You can use a ‘counting’, Von Neumann machine to represent floating point numbers to any precision you like, and use those to simulate a brain complete with juices in the grey matter and the firing of neurons (go down to the molecules if you like). Of course no actual ‘man in a room’ could give you an answer in his lifetime but it’s just a thought experiment. If this system represents a Chinese speaker’s brain and it’s a faithful reproduction, I see no reason why it wouldn’t count as a consciousness. Don’t be misled by the fact that its ‘life’ would run very slowly; it won’t notice.
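(To be concrete about what “use floating point numbers to simulate the firing of neurons” might look like, here is a toy leaky integrate-and-fire sketch; the constants and the random network are invented for illustration and are nothing like a faithful reproduction of a real brain.)

    import numpy as np

    # A toy leaky integrate-and-fire step: floating-point simulation of model neurons.
    N = 1000                              # number of model neurons (illustrative)
    dt, tau = 0.001, 0.020                # time step and membrane time constant, seconds
    v_rest, v_thresh, v_reset = -0.065, -0.050, -0.065   # volts (textbook-style values)

    rng = np.random.default_rng(0)
    weights = rng.normal(0, 2e-4, (N, N)) # made-up synaptic weights
    v = np.full(N, v_rest)                # membrane potentials
    spikes = np.zeros(N, dtype=bool)

    def step(external_input):
        """Advance every model neuron by one time step."""
        global v, spikes
        synaptic = weights @ spikes                      # input from neurons that just fired
        v = v + (-(v - v_rest) + external_input + synaptic) * (dt / tau)
        spikes = v >= v_thresh                           # which neurons fire now
        v[spikes] = v_reset                              # firing neurons reset
        return spikes

    for _ in range(100):
        out = step(rng.normal(0.02, 0.005, N))           # noisy external drive (made up)
    print(int(out.sum()), "model neurons fired on the last step")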
posted by Mind's I Reader on March 15, 2007 #
As far as I can tell, Aaron is saying that consciousness is a physical thing that happens in brains. Our experience of consciousness is a physical characteristic of what working brains do. Something like that. In which case, a computer program might be able to perfectly model it, but it would not actually be the thing itself. A computer could perfectly model a rock, and tell you everything about the rock’s behavior, but that internal modelling would not be a rock.
Of course, it is possible that consciousness is a physical thing.
IMHO, however, whether it is true is not shown by the example of the Chinese room. It is also possible that the invisible dragon in my garage is a physical thing. Since we don’t as yet have any way of detecting the difference between this type of consciousness and a simulated consciousness, it’s a lot like saying that consciousness is an invisible, undetectable, dragon in your head.
So, hopefully, Aaron can supply us with some reason that consciousness is a physical thing. Sorry if I’m acting obtuse.
Also, the bulk of Searle’s description of consciousness as a “natural phenomenon which developed through evolutionary processes and is caused by the actions of the brain” does not conflict at all with the functionalist’s Information Process idea of consciousness. The only part that conflicts is the idea that it is a physical thing.
Prove that, and you won’t have to deal with this argument anymore.
posted by Ben Donley on March 15, 2007 #
Aaron,
the neuroscience researchers, neurologists and psychiatrists define consciousness as level of response to the environment. Subjective self-awareness is a definition that enables circular arguments about consciousness. It’s navel-gazing. If you want some mind-breaking logic, go do quantum physics or take a higher math class taught in the method of R. L. Moore. I’m sure somebody at Stanford or MIT is teaching one.
The Summer of Code thing sounds fun and helpful,
Niels
posted by Niels on March 16, 2007 #
The blockhead argument seems very misleading to me. Some reasons:
You can’t build it in our universe, or in any universe remotely like ours. (Even if the universe was big enough to contain the information required for blockhead, it wouldn’t be able to access it quickly enough to carry on the conversation without transmitting information faster than the speed of light.) This isn’t a pedantic quibble - thought experiments are only useful when their premise can’t be ruled out entirely. You’re claiming that the Turing test is bad because we can build this obviously unconscious system that could pass it - but we can’t build it - not even close.
Even if it could exist, you immediately run into a problem with your definition of “all possible” conversations. Which conversations are possible and which aren’t? Are you restricting it to conversations that would not be implausible to have with a mentally sound person? Or does it include all conversations that are grammatical, even if they are completely non-linear? If you choose the latter, it probably wouldn’t even seem conscious; if you choose the former, it is hard to see how you could build something like blockhead without creating an AI to build blockhead. And that means that blockhead is just an expansion of all the possible ways that the state of that AI could evolve over the course of 30 minutes. And here is where the sleight-of-hand comes in: You know that people won’t intuitively agree that the AI can’t be conscious, since it is after all an AI. So you try to trick them into calling an extremely inefficient representation of that AI unconscious, because it’s easy to say “a big list isn’t conscious” as long as you don’t stop and think about how big “big” is here.
Ignoring the above point, a list of all possible conversations is not sufficient. Conversations involve choices made by both parties - for example if I ask blockhead what its favorite color is, it can say “red” or “blue”. What part of blockhead makes that choice?
posted by on March 16, 2007 #
I personally recommend Richard Rorty, who might show you that many of the words you have tried to throw around (e.g. ontology) are philosophical debris.
The whole consciousness debate is ridiculous. No reasonable definition is being presented. You seem to take pleasure in using it in awkward contexts to show how great your (ill-defined) argument is. This technique of argument is frequently known as “question begging”.
As far as thinking about this problem in general though, I would recommend you abandon being a partisan in a tired debate and look at some current research in neuroscience. fMRI has given us insight into the brain that simply wasn’t possible when Dennett and Searle got started on this debate. Here’s one public exchange these two had twelve years ago:
http://www.nybooks.com/articles/1680
I really recommend trying some of Peter Dayan’s papers on for size. Not the easiest stuff to get started on, but you get discussion of the brain based on solid research, not Chinese Room false dichotomies.
http://www.gatsby.ucl.ac.uk/~dayan/papers/index.html
I’m not Dennett’s biggest fan, but he’s a clever guy who can be a pleasure to read. You end your post looking like a glib Searle fanboy by charging that he is “insane.” The Dennett you are referring to is what many in philosophy would call a “straw man” set up by Searle.
posted by Jeremy Corbett on March 16, 2007 #
Blockhead is interesting. Assume that the inquisitor gets the first move and makes a statement. Assume I am the inquisitor and my first statement is X. Presumably Blockhead does a lookup in its “table of initial statements” and gives the corresponding response Y (ignore the fact that even this hypothetical “table of initial statements” may have more entries than the number of protons in the universe). After the end of the conversation another inquisitor comes along and starts a conversation with statement X. Blockhead of course does not know whether this inquisitor is me again or someone else. Say it is me and I again start with statement X - blockhead will respond again with Y, whereas a conscious being might well come up with a different response, eg “Hey, that’s weird, someone just asked me that”.
If blockhead is only judged on one single conversation it can be deterministic and will always respond in exactly the same way. A conscious being would not do that. This perhaps indicates a weakness of the Turing test.
If Blockhead is going to be judged on multiple conversations, then I would say it could not pass itself off as conscious unless it keeps state from previous conversations. Imagine X is “How many conversations have you had?” If Y is “Ten” then I could respond by saying “But you said that last time I spoke to you, so it must be at least 11”. So Blockhead would have to respond with something like “What, you believe everything I say?” or “My memory is terrible”. So Blockhead’s lookup tables must be designed to be consistent with a being that keeps no state information between conversations. I would hesitate to refer to any such hypothetical being as conscious.
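(A toy illustration of that statelessness, with invented table entries: the reply is a pure function of the transcript so far, so identical openings always get byte-for-byte identical replies, conversation after conversation.)

    # Toy illustration of the statelessness problem. Table entries are invented;
    # a real Blockhead table would be astronomically larger.
    BLOCKHEAD_TABLE = {
        ("How many conversations have you had?",): "Ten.",
        ("How many conversations have you had?", "Ten.",
         "But you said that last time I spoke to you, so it must be at least 11."):
            "What, you believe everything I say?",
    }

    def blockhead_reply(transcript):
        """The reply depends only on the current transcript -- no memory between runs."""
        return BLOCKHEAD_TABLE.get(tuple(transcript), "Hmm.")

    first = blockhead_reply(["How many conversations have you had?"])
    second = blockhead_reply(["How many conversations have you had?"])
    print(first == second)   # -> True: identical openings always get identical replies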
posted by Ian Gregory on March 16, 2007 #
The trouble with the Chinese Room is that it postulates that the room is possible to construct and then draws conclusions from that assumption. It’s as if I were to say “Suppose I presented you with a pig that could speak Hungarian, would you say that the pig was conscious or not?” My first reaction would be to say “show me the pig”. Somehow, people who would consider the idea of a Hungarian-speaking pig to be ludicrous are happy to accept the idea of a “Chinese-speaking room”.
Let’s go with that assumption for a moment, though. Where did the data from the room come from? Well, of course it came from someone who could speak Chinese. Somehow we “compiled” details of all the knowledge of Chinese of that person into an instruction list for the room, in an analogy with a computer program.
Let’s follow that analogy a bit. Instead of a “compiled” model, consider instead an “interpreted” model. In computer language terms these things are equivalent. Usually the only difference comes down to performance. Instead of pre-compiling the knowledge of the Chinese speaker, just put the speaker into the room and let him interpret the incoming messages and produce the output. Is there any doubt now where the consciousness of “the room” lies?
posted by Doug Clinton on March 16, 2007 #
Let’s look at Blockhead, now. The definition says “It simply contains a list of all possible thirty minute conversations.”
What process could be used to compile such a list? I would assert that there are only two ways that such a list could be generated. The first is to have a person or a number of persons sit around and think up all the different conversations and record them. The second would be to have a computer generate all 30-minute combinations of words and then have one or more people filter out the ones which are meaningless.
Even if we assume that the set of conversations could practically be generated (i.e. not take longer than, say, the heat-death of the universe to compile), it is perfectly clear that the only way to generate the list for Blockhead to work on is to pass it through the mind of one or more conscious entities.
Neither this post nor my earlier one on the Chinese Room actually says anything about what consciousness is or might be. They simply point out that the postulates of both the Chinese Room and Blockhead are invalid, so any questions or conclusions that might arise from those problems are invalid.
posted by Doug Clinton on March 16, 2007 #
We don’t consider the cellphone to be conscious because we know that what appears to be one person talking to a box is actually two people having a conversation in real time through a box.
The instructions in the Chinese room are an asynchronous form of communication. The outsider is having a conversation, asynchronously, with whoever wrote those instructions. The person who wrote the instructions presumably understands Chinese.
The same is true of computer programs. The user is having a delayed, anticipated conversation with the programmer.
You can play all sorts of tricks to make devices seem alive (Tickle-Me-Elmo), but if you consider these tricks to simply be asynchronous communication all the deep philosophical moments go away.
posted by Patrick May on March 20, 2007 #
“the postulates of both the Chinese Room and Blockhead are invalid”
Exactly. If you can assume anything then you can prove anything.
posted by David S. on March 28, 2007 #
So, do you believe that the Church-Turing thesis is false, or that consciousness (whatever that may be) is not a computable function? Or some alternative that I have failed to list - as far as I can see those are the only possible conclusions to accepting the Chinese Room argument.
For the purposes of this question let’s assume the quantum version of the Church-Turing thesis, though I’m doubtful that quantum effects play a role in the brain (always felt rather dubious about Penrose’s arguments about that).
posted by Jack on April 6, 2007 #
Why do you need to bring aliens into the picture? For the subjective experience you refer to as “seeing red”, other humans can’t experience it either. Imagine aliens that have the technology to take human eyes and incorporate them into their own anatomies. Does that give them the ability to have the same subjective experience as humans?
posted by EKoL on August 4, 2007 #