It continually amazes me how many otherwise-intelligent people I know claim to be fans of Daniel Dennett, a bitter hack philosopher who spends his days sucking up to scientists and writing personal attacks on other philosophers. As Daniel Davies put it, “I used to be a rabid Dennettite [until] I started reading more widely in the subject, and found that Dennett had been pretty (no, make that very) badly behaved […] And that’s when the hate developed.”
At some point it feels unfair to keep picking on the guy, but I came across a gem that, even after looking at it for months, still manages to amaze me. Here, in full, is Daniel Dennett’s argument that determinism is compatible with free will.
(For context, this comes after pages discussing Conway’s game of life, in which some deterministic animated squiggles don’t bounce into (“avoid”) other animated squiggles.)
(Gazzaniga and Steven, p. 65, summarizing Dennett’s Freedom Evolves, p. 56)
One just has to marvel at the sheer stupidity it takes to advance such an argument, much less base a 368-page book on it. I mean, surely in the course of writing such a book you would come to notice that your core argument is based on a pun. (Shame too on Gazzaniga and Steven, who base their own argument on this absurd piece of “logic”.)
Yes, Daniel Dennett is literally arguing that because some deterministic animations depict things being avoided, determinism does not imply inevitability. (It would seem an obvious corollary that Mickey Mouse has free will.)
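For context on why the premises even sound plausible: Conway’s Game of Life is fully deterministic. Each generation follows mechanically from the last, with no room for anything to go otherwise. A minimal sketch (my own illustration in Python, not anything from the book):

```python
from collections import Counter

def life_step(live):
    """One deterministic step of Conway's Game of Life.
    `live` is a set of (x, y) live cells; the next state follows
    mechanically from the current one -- no choices anywhere."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step iff it has exactly 3 live neighbours,
    # or it is currently live and has exactly 2.
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker" flips between horizontal and vertical forever,
# inevitably: same start, same rules, same future.
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(life_step(blinker)) == blinker
```

Everything that happens in such a world, including every “avoidance,” is fixed by the rules and the initial configuration.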
I don’t get the problem. What part of that don’t you understand?
posted by Jamie McCarthy
on January 19, 2008 #
I guess to be more precise, I should ask where you think the contradiction is. Explaining that more carefully will help me identify more precisely what it is you don’t understand. (Your implication that Mickey Mouse cartoons are a good example suggests to me you’re confused about intentionality and perspective, I guess, but I can’t say for sure. Partly because the prior sentence doesn’t scan.)
posted by Jamie McCarthy
on January 19, 2008 #
I thought it was fairly obvious. The flaw in the argument is line 3, at least assuming we are using “inevitable” in its traditional sense in such discussions.
Basically, the fact that an object in a deterministic world is avoidable, in the sense that other deterministic-world objects appear to avoid it, does not mean it is evitable in the sense that the deterministic object could do otherwise. Running through the argument with Mickey Mouse:
- In some deterministic cartoons there are mouses avoiding walls.
- Therefore, in some deterministic cartoons some things are avoided.
- Whatever is avoided is avoidable, or evitable.
- Therefore, in some deterministic cartoons not everything is inevitable.
- Therefore, determinism does not imply inevitability.
While the cartoon wall was, in principle, avoidable, it was inevitable that the deterministic Mickey Mouse would walk into it.
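The equivocation can be made concrete with a toy of my own (nothing from the book): whether a deterministic “avoider” avoids the wall is settled entirely by its program and starting state, so the avoiding is every bit as inevitable as the colliding:

```python
def mickey_step(pos, wall, has_sensor):
    """One step of a deterministic cartoon Mickey walking right.
    Whether the wall gets 'avoided' is fixed by the rules and the
    initial state; nothing could have gone otherwise."""
    if has_sensor and pos + 1 == wall:
        return pos      # the wall is 'avoided' -- inevitably
    return pos + 1      # Mickey hits the wall -- equally inevitably

# Same start, same rules: each outcome is settled in advance.
assert mickey_step(4, wall=5, has_sensor=True) == 4   # avoids
assert mickey_step(4, wall=5, has_sensor=False) == 5  # collides
```

The wall is “avoidable” only in the sense that a differently configured system would have avoided it; this system never could have done otherwise.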
posted by Aaron Swartz
on January 19, 2008 #
Aaron,
I’d expect you to know better than to waste time arguing about definitions with philosophers. Exactly how we define vague concepts like free will, evitability, or avoidability, doesn’t really matter, since there isn’t much to build on those definitions. They just don’t seem to fit into a rigorous logical framework, so let’s shrug, and focus on things that do.
Is there other work of Dennett’s that frustrates you? In particular, what do you think of his theories about the Darwinian evolution of memes?
Cheers,
Andrey
posted by Andrey Fedorov
on January 19, 2008 #
“Free will” as most non-philosophers mean it is like Medieval free will. Here, “agents” are the ultimate cause of their actions, and thus they could not be predicted by a machine that knows everything about the present (arguably, then, the future is not entirely knowable).
“Free will” as “compatibilists” call it is about the will (the attempt to perform actions) being “unconstrained”. This is called “Hobbesian free will”.
-For the keen physicalists (where qualia are a subset of the physical, like gravity/fields and such), “free willing” agents don’t feel constrained or oppressed. They feel how they are programmed to: if someone holds a gun to your head and takes your money, you will feel unfree, whereas if you are having fun with some friends, even though it was determined physically a long time ago, it feels unconstrained.
-For the unkeen and foolish physicalists out there (who think robots=humans and CPU chip/AI complexity = consciousness), no coherent definition seems to be given.
So really, “compatibilism” is a misleading and confusing word. Determinism is not compatible with “Medieval free will”, but it is compatible with “Hobbesian free will”.
So…what we have here are definition wars as usual ;)
posted by Aaron Schulz
on January 19, 2008 #
I’m not in a position to give advice or anything, but: Aaron, do you really care? Why, if you do?
posted by ivan kurmanov
on January 19, 2008 #
That has to be the most absurd definition of evitable (not to mention harms) I’ve ever seen.
Jamie: please do explain.
posted by Jacob Rus
on January 19, 2008 #
I’m with some of the other commentators in that I’ve never really understood why people feel so strongly about free will. Given that a universe with free will is indistinguishable from one without free will, it seems to me not to matter what we call our particular one.
But I did think (despite disliking the whole seventies-retro-feel that the game of life thing brings to the book) that Dennett had a point in the first 50 pages of his book. Humans are open systems, and our dynamics are not determined by knowledge of any small subset of the universe.
posted by tom s.
on January 19, 2008 #
I’m no fan of DD, but I would not base my criticism of him on a summary somebody else has written of one of his books. Have you read the actual book? (I haven’t, hence my lack of opinion on the subject).
posted by Björn
on January 19, 2008 #
Why would Dennett bother to argue that determinism doesn’t imply that everything is un-dodge-able? That’s a claim any incompatibilist would give him. I can’t believe he would be stupid enough to then go on and claim that inevitable-as-undodgeable = inevitable-as-had-to-happen-given-the-laws-of-physics-plus-initial-conditions.
But if he doesn’t, then there’s really no point to giving this argument in the first place. Maybe G&S misunderstood whatever he actually said?
posted by Liron Greenstein
on January 19, 2008 #
the interesting stakes in this case are that gazzaniga uses dennett’s argument to show that justice and law are relevant in a world of deterministic neurobiology, because our will is the underpinning of all european law.
to the extent that we choose evil without coercion it’s our society’s right and obligation to punish us. we presume our moral decisions occur outside of the context of environmental forces - anything that forces the hand of an individual, even nature, removes personal responsibility and renders the legal system irrelevant. this is why we have coercion defenses and insanity pleas.
the avoider language is important- it’s specifically about avoiding committing crime. it’s really troubling to see a legal scholar grasping at straws like this. i have a roomba that wanders in a random walk around my house, but avoids falling down stairs due to a sensor. dennett and gazzaniga seem to be saying, by extension, that because of the existence and observed behavior of my robot, robots that do fall down stairs have chosen to fall. therefore if falling were a crime, they could be prosecuted for it. of course, there’s no will here, there’s just a sensor, which is present or not, and working or not. these men have nothing based in the science and the medicine, merely defective logic based on knowing little about cellular automata, and a kind of willful semantic ignorance of the language they are using.
i don’t think the argument for the legal system is as weak as gazzaniga makes it, but it points to medical research being highly disruptive to society’s ideas of justice in the next few decades. i hope better arguments than dennett’s are put to the task.
posted by q
on January 19, 2008 #
I don’t find this argument to be much dumber than Searle’s argument that computers can’t be conscious because they aren’t “brains” (a brain, of course, being that which is conscious) - whatever the reason why you fail to see the stupidity of Searle’s argument, people probably fail to see the stupidity of Dennett’s argument for the same (or a similar) reason.
posted by Mark
on January 19, 2008 #
It seems evident that people, including Aaron, care about this because it’s another way of posing the question of (my paraphrasing) “Do we have mystical souls of a divine nature (free will), or are we mere machines no different from computers (determinism)?”
We are like computers == determinism
We transcend physicality == free will
posted by Seth Finkelstein
on January 20, 2008 #
a) yes, the pun is terrible.
b) Andrey’s comment about wasting time “arguing about definitions with philosophers” is somewhat disconcerting. That sort of thinking goes a long way toward convincing me to not buy another philosophy book outside the classics, ever.
c) You all need to go contemplate the consequences of quantum mechanics and thermal physics (especially Brownian motion) on synapses and other cellular ligand-receptor systems. You should also contemplate how neurons, or small systems of neurons, will likely prove to be Turing complete. One example of how cells could be Turing complete: (1) control: the universal DNA-RNA-protein dogma, (2) program: the unique cellular machinery of each neuron, (3) data: synaptic and interstitial inputs that stimulate the machinery to do things that lead to other outputs. Also contemplate the lossy nature of this biological computer: entropy, leaky integration, thermal losses, all conspire to insert a fair amount of random data into the Turing machines.
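A toy version of that lossy biological computer (my own sketch; the function, parameters, and values are made up for illustration, not taken from any neuroscience source): a leaky integrate-and-fire neuron that integrates its inputs, leaks charge each step, and picks up thermal-style noise:

```python
import random

def lif_neuron(inputs, threshold=1.0, leak=0.9, noise=0.05, seed=0):
    """Toy leaky integrate-and-fire neuron. Each step it leaks a
    fraction of its membrane voltage, adds the next input plus
    Gaussian 'thermal' noise, and fires (then resets) at threshold."""
    rng = random.Random(seed)
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x + rng.gauss(0, noise)
        if v >= threshold:
            spikes.append(True)
            v = 0.0  # reset after firing
        else:
            spikes.append(False)
    return spikes

# With noise switched off the spike train is fully determined;
# with noise on, the same inputs can yield different trains.
assert lif_neuron([0.6] * 5, noise=0.0) == [False, True, False, True, False]
```

Even this caricature shows the point of the comment: a small amount of noise injected at each integration step is enough to decouple the output from any simple deterministic reading of the inputs.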
Along those lines, explain why psychoactive drugs work. I guarantee a person on clozaril and haldol will have a different set of thoughts one hour after the injections than they would if they hadn’t taken the injections.
I’m pretty sure that once the dust settles, maybe in this century, we’ll find that while neurons, or small sets of neurons, may be Turing complete, there’s simply too much quantum randomness in the brain (synaptic and interstitial) to advance any simple deterministic model; yet at the same time, it will be difficult to argue that the mind transcends the physical processes that occur within. If anything, we will probably extend the entity which is the person to include the whole person. Lose your pinky toe and you’ll have different thoughts. You may still be able to work on high-level math or debate the merits of the death penalty, but if you lose part of your pancreas, or a kidney, the chemistry of the interstitial fluid in your brain will be oh so slightly different, or will degrade more quickly over time. Something will change in your thinking.
In a very real way, you can manipulate your own future thoughts by manipulating your current environment. If you start buying fewer processed snack foods, then you’ll be less likely to think about weight-loss programs in 20 years. Of course, you’ve been manipulating your own thoughts for years. Or have you? Where did the first manipulations come from? Or were those simply the condensations of many random events that piled up over time, influenced, remember, by environmental stimuli too, stimuli that have been affecting your outcomes all the way since your mother’s eggs stopped reproducing while she was in utero, leading to decisions her body made about which oocyte follicles would mature, and your father’s net pH the night you were conceived, leading to a subtle shift in which spermatocytes could gain entry to the maternal oocyte, on which night, and whether or not she drank that night, which might affect, slightly, when your neural tube closed, and some weeks later, how much glucose she had on board when it was time for your heart to start beating … did it start a few minutes later, slowing the rate of division of neurons in your brain for a few moments?
While the whole thing may be deterministic, after allowing for quantum mechanics and initial conditions, the machines are so numerous and the system so complex that the only functional way to get through life is to act on what we perceive to be free will. Greene’s fMRI results on the trolley problem hint at this.
posted by Niels Olson
on January 20, 2008 #
Mark: Maybe because that’s not Searle’s argument.
posted by Aaron Swartz
on January 20, 2008 #
Aaron,
It’s much more honest to do another post professing your love of Searle than to call Dennett dumb. As has been pointed out, you have probably done very little to read any Dennett without an agenda going in. Dennett has written books whose content is not personal attacks; e.g., many people are familiar with his recent work Breaking the Spell.
There is a temptation to suggest, and I will succumb to it, that you are embarrassed by how much you are an advocate of Searle. Your post does not mention the name, but the “dsquared” quote that you dug up from 2002 did not finish this sentence:
…Dennett had been pretty (no, make that very) badly behaved when it came to falsely attributing strawman positions to John Searle, and not admitting it when corrected.
Now someone has brought up the question of why you are so concerned about the issues of determinism and free will. I am curious about this also. In your article about the game you leapt rather quickly to the notion that we are so much more than machines. What are you so worried about? Determinism versus free will is an old concern, and lots of people have moved on to more reasonable debates. From my point of view, we know it’s far too hard to predict relatively simple phenomena (on our length scale; ignore the fact that quantum mechanics is probabilistic) compared with humans. Why bash Dennett when all you want to say is that you are a real boy?
posted by Jeremy Corbett
on January 20, 2008 #
I disliked Dennett long before I’d read anything by Searle, so I do not think your explanation is true. It’s true that Searle is something of the anti-Dennett, but I think that’s more of an explanation of why I like Searle than why I dislike Dennett.
I’m not worried about free will (in this piece); this just happens to be the example someone showed to me.
posted by Aaron Swartz
on January 22, 2008 #
BTW, I think it’s intriguing that nobody (save perhaps Jamie) has tried to defend the argument. Everyone has either attacked me personally for one reason or another, attacked the choice of subject (certainly not mine!), etc.
posted by Aaron Swartz
on January 22, 2008 #
I’ll repeat Bjorn’s question: Aaron, have you read Freedom Evolves?
posted by Jamie McCarthy
on January 22, 2008 #
Aaron, I passed on the argument itself explicitly because I don’t think it’s really at issue. I was clear on that. But it’s not really surprising that if you make a blog post along the lines of “so-and-so is dumb because of a complicated philosophical argument”, the most attention is drawn by the “so-and-so is dumb” part rather than the complicated philosophical argument.
posted by Seth Finkelstein
on January 23, 2008 #
Aaron,
I’m not surprised that you disagree with my characterization of Searle’s argument, but I have not been able to understand it in any other way, even after asking him personally about it when I saw him speak several months ago.
I asked him why a computer simulation of a brain couldn’t “be” (or contain) consciousness, and his response, verbatim, was “simulations leave something out”. If his position is that any conceivable simulation of a brain must leave something out, it is hard to imagine what the something must be if not gooey gray and white matter. IOW, computers can’t be conscious because they aren’t made of the same stuff as brains.
If I’m misunderstanding Searle, do enlighten me.
posted by Mark
on January 23, 2008 #
Searle’s argument is this:
- Assume the computer simulation is conscious.
- Then the physics of consciousness (e.g. qualia) would have to “attach” to the computer running the simulation.
- That would mean physics would have to be able to tell the computer was running the simulation.
- But that’s impossible, because there’s no physical distinction between computers running conscious simulations and other random atoms bouncing about — it’s only conscious humans that interpret that series of bouncing atoms as a simulation of a brain.
Do you have a way for the laws of physics to detect sufficiently complicated simulations?
posted by Aaron Swartz
on January 23, 2008 #
P.S. FWIW, much experience has taught me that accosting people after talks with counterarguments is about the worst way of getting an intelligent response.
posted by Aaron Swartz
on January 23, 2008 #
Aaron,
That argument displays the exact circularity I’m talking about. How do the “physics of consciousness” know that they have to “attach” to a brain that is conscious, but not a brain that is not conscious? (Such as one of a person in deep sleep, a coma, recently deceased, or, say, the brain of a mosquito). Do you have a way for the laws of physics to detect sufficiently complicated neural activity?
(This is notwithstanding that there is no reason for anyone to accept the existence of “qualia” in the first place, though I assume that is a premise of Searle’s argument.)
By the way, I didn’t accost Searle, I asked a question during the Q&A session, and it was very much in the context of what he’d been talking about.
posted by Mark
on January 24, 2008 #
How do the laws of physics attach consciousness to awake people but not to people who are knocked out? Clearly the brain must be doing something specific to cause consciousness when we’re awake and do it differently or fail to do it when we’re knocked out. This is biological naturalism. Clearly this can’t be the case for simulations, since a computer can be simulated by an appropriate series of ping pong balls and flippers, and it seems pretty obvious that those don’t do anything special to cause consciousness.
You don’t have qualitative experiences? I don’t believe you.
posted by Aaron Swartz
on January 24, 2008 #
Why is it “obvious” that neurons do something to cause consciousness, but obvious that ping pong balls and flippers don’t? This is why I summarized Searle’s argument as “computers can’t be conscious because they aren’t brains”. (Pretty much any time you use “clearly” in a philosophical argument, you are using it to mask some unfounded assumption.)
I have qualitative experiences. I have no evidence that they are the result of a physical substance that has “attached” to my brain. That part is pure speculation, like the ether of Victorian science.
Anyway, the point here is not for us to rehash tired arguments about consciousness. My point was to illustrate that you are making some assumption that makes Searle’s arguments sensical, and Dennett’s nonsensical. I don’t particularly like either of their viewpoints, but I don’t attribute that to their stupidity, I attribute it to their different starting assumptions.
posted by Mark
on January 24, 2008 #
Because people are conscious and ping pong balls are not. Do you disagree?
What’s your alternative?
Yes, you’re right, I’m assuming people are conscious, ping pong balls aren’t, and that my subjective experiences are connected to my brain. I don’t think most Dennett fans would disagree with any of those, though.
posted by Aaron Swartz
on January 24, 2008 #
Aaron, I’ve mentioned before, I think you’re committing a fallacy of pathological reductionism.
It’s like saying “A picture can’t be made up of tiny little dots, because looking at each little dot, nobody can tell if it’s part of a picture or just a random blob”.
“A human body can’t be a bunch of atoms, because looking at an atom, nobody can tell if it’s part of a human body or a rock”
Hence - “A brain can’t be a big computer, because looking at each circuit, nobody can tell if it’s part of a human mind or a video game”.
I can’t see how you can reasonably ignore all the evidence from neurology in favor of a slipperily phrased bit of pontification that should be approached with suspicion on vagueness alone.
posted by Seth Finkelstein
on January 24, 2008 #
Aaron,
You’re comparing apples to atoms. Yes, it’s obvious people are conscious and ping pong balls aren’t. But nobody is suggesting that ping pong balls (or transistors), individually, could become conscious. The question is about large collections of them.
A more valid comparison for you to have made would have been “Because neurons are conscious and ping pong balls aren’t. Do you disagree?”, to which I would have answered yes, I disagree, because I don’t think an isolated neuron is (or can be) conscious, any more than I think an isolated transistor can be conscious.
And if you think an isolated neuron is conscious, I don’t see how you can defend that position by any means other than repeated assertion, since no isolated neuron has ever been observed to display any sign of consciousness.
posted by Mark
on January 24, 2008 #
I don’t think an isolated neuron is conscious.
posted by Aaron Swartz
on January 24, 2008 #
I think that the term “physics of consciousness” is rather dubious. Are there any solid results from the “physics of consciousness”, or are they just philosophical conjectures dressed up as physics?
posted by Jeremy Corbett
on January 26, 2008 #
weighing into the consciousness debate…
Fodor had a nice response to Searle. Imagine putting a little metal wrapper on each axon, and another one on each dendrite. This wrapper mediates all neuron-to-neuron connections. A signal travels down a neuron, hits a wrapper, the wrapper passes a neurotransmitter-simulator to the other wrapper, which interprets it and sends the corresponding signal down the next neuron. One would think that, even if all of our neurons were wrapped this way, we’d still be conscious. It’s hard to imagine why not; the same stuff is going on in our neurons; only the space between them has some mediating interaction.
But once we start mediating our neuronal interactions with a neurotransmitter-simulator, we can imagine replacing just about everything in our brain with simulators made of metal (or ping-pong-ball plastic or whatever you like), and as long as all these things are performing functions that are isomorphic to brain functions in the right way, we’d be conscious. So machines with such brains would also be conscious.
And anyway, biological naturalism doesn’t necessarily follow from the Chinese Room computers-can’t-be-conscious argument. If Searle’s right that the Chinese Room is in fact isomorphic enough to what the computer (or any Turing machine) is doing to rule out conscious computers, then maybe the brain isn’t a Turing machine. Then we need to find out what it is and make a machine like it.
posted by Liron Greenstein
on February 1, 2008 #
Aaron
“1. In some deterministic worlds there are avoiders avoiding harms.”
Doesn’t the logic fail right here? The implicit assumption that avoiders avoid harms means they are “conscious” of harm in order to avoid it.
Cheers,
Arunn
posted by Arunn
on February 11, 2008 #
I don’t know, man. When I was immersed in the book the argument was subtle and thought-provoking. I guess more things are involved here for you to dismiss it with such hubris. Be careful: you seem to be quite good at the “writing personal attacks on other philosophers” you say you hate about DD.
In any case, cheer up. Seriously. So young and already full of such vitriol?
posted by elzr
on April 5, 2008 #
Hi, I think you’ve kind of missed the point of Dennett’s talk of evitability. He doesn’t claim to be providing what you call “Medieval free will” with this example, but instead to be showing that the lack of “Medieval free will” does not mean that we lack all sorts of free will. The point of Freedom Evolves is to show that evitability, or free will, can exist within a deterministic world, and to argue that it is these forms of free will that are the morally important ones.
posted by Harry Farmer
on April 14, 2008 #
Liron writes:
Fodor had a nice response to Searle. Imagine putting a little metal wrapper on each axon, and another one on each dendrite. […] One would think that, even if all of our neurons were wrapped this way, we’d still be conscious. It’s hard to imagine why not; the same stuff is going on in our neurons; only the space between them has some mediating interaction.
Searle responds to this in a later book. He does not think that we’d still be conscious if all our neurons were wrapped this way. Assuming we’d still be conscious under these circumstances is tantamount to assuming biological naturalism is false, since biological naturalism suggests there’s a biological process other than pure neuronal interaction that leads to consciousness.
And anyway, biological naturalism doesn’t necessarily follow from the Chinese Room computers-can’t-be-conscious argument. If Searle’s right that the Chinese Room is in fact isomorphic enough to what the computer (or any Turing machine) is doing to rule out conscious computers, then maybe the brain isn’t a Turing machine. Then we need to find out what it is and make a machine like it.
Searle explicitly says we probably can make conscious machines; he just says Turing machines aren’t conscious. He agrees with you entirely on this point.
posted by Aaron Swartz
on May 17, 2008 #
As I said, by the physics of consciousness I mean qualia — the physical fact that I have conscious subjective experiences. This is so different from every other physical process we’ve studied that I don’t see how it could exist without its own physical basis.
posted by Aaron Swartz
on May 17, 2008 #
Let me give you a piece of advice, Aaron: When you come across an argument that doesn’t make sense to you, assume that the failure is yours, not the author’s. This applies especially when the author is someone like Daniel Dennett, and it applies especially when other smart people don’t make a fuss about it (or at least not the simple-minded fuss that you make). You’re like a non-physicist who comes across Einstein’s theory of relativity and exclaims: “How could mass possibly be related to energy like that? What an idiot — how could anyone take Einstein seriously?!?” Meanwhile, smart people, educated people, read Einstein and conclude: “Brilliant!”
posted by Phillip Torres
on July 28, 2008 #
Accusing someone of making personal attacks and being dumb, and then attacking him personally followed by a few lines of dumbness is really dumb, so I thought I might as well attack you personally on it.
posted by
on October 1, 2009 #
“Let me give you a piece of advice, Aaron: When you come across an argument that doesn’t make sense to you, assume that the failure is yours, not the author’s. This applies especially when the author is someone like Daniel Dennett, and it applies especially when other smart people don’t make a fuss about it (or at least not the simple-minded fuss that you make). You’re like a non-physicist who comes across Einstein’s theory of relativity and exclaims: “How could mass possibly be related to energy like that? What an idiot — how could anyone take Einstein seriously?!?” Meanwhile, smart people, educated people, read Einstein and conclude: “Brilliant!” ”
Oh yeah. Now we have Ayatollah Daniel Dennett. Woe to you if you use your own brains and point out a few of his blunders, because his followers are Smart and Educated People. What a brilliant argument.
posted by qualia
on September 22, 2010 #
You can also send comments by email.
Comments
I don’t get the problem. What part of that don’t you understand?
posted by Jamie McCarthy on January 19, 2008 #
I guess to be more precise, I should ask where you think the contradiction is. Explaining that more carefully will help me identify more precisely what it is you don’t understand. (Your implication that Mickey Mouse cartoons are a good example suggests to me you’re confused about intentionality and perspective, I guess, but I can’t say for sure. Partly because the prior sentence doesn’t scan.)
posted by Jamie McCarthy on January 19, 2008 #
I thought it was fairly obvious. The flaw in the argument is line 3, at least assuming we are using inevitable in its traditional sense in such discussions.
Basically, the fact the fact that an object in a deterministic world is avoidable in the sense that other deterministic-world objects appear to avoid it, does not mean it is evitable, in the sense that the deterministic object could do otherwise. Running through the argument with Mickey Mouse:
While the cartoon wall was, in principle, avoidable, it was inevitable that the deterministic Mickey Mouse would walk into it.
posted by Aaron Swartz on January 19, 2008 #
Aaron,
I’d expect you to know better than to waste time arguing about definitions with philosophers. Exactly how we define vague concepts like free will, evitability, or avoidability, doesn’t really matter, since there isn’t much to build on those definitions. They just don’t seem to fit into a rigorous logical framework, so let’s shrug, and focus on things that do.
Is there other work of Dennett’s that frustrates you? In particular, what do you think of his theories about the Darwinian evolution of memes?
Cheers, Andrey
posted by Andrey Fedorov on January 19, 2008 #
“Free will” as most non-philosphers mean, is like Medival free-will. Here, “agents” are the pan-ultimate cause of their actions and they thus could not be predicted with a machine that knows everything about the present (arguably, then, the future is not entirely knowable).
“Free will” as “compatibalist” call it, is about the the will (attempt to perform actions) as being “unconstrained”. This is called “Hobbian free-will”.
-For the keen physicalists (where qualia are are subset of the physical, like gravity/fields and such), “free willing” agents don’t feel constrained or oppressed. They feel how they are programmed to: if someone holds a gun to your head and takes your money, you will felt unfree, whereas if you are having fun with some freinds, even though it was determined physically a long time ago, it feels unconstrained. -For the unkeen and foolish physicalists out there (who think robots=humans and CPU chip/AI complexity = consciousness), no coherent definition seems to be given.
So really, “compatibalism” is a misleading and confusing word. Determinism is not compatible with “medival free-will”, but it is compatible with “Hobbian free-will”.
So…what we have here are definition wars as usual ;)
posted by Aaron Schulz on January 19, 2008 #
I’m not in a position to give advice or something, but: Aaron, do you really care? Why, if you do?
posted by ivan kurmanov on January 19, 2008 #
That has to be the most absurd definition of evitable (not to mention harms), I’ve ever seen.
Jamie: please do explain.
posted by Jacob Rus on January 19, 2008 #
I’m with some of the other commentators in that I’ve never really understood why people feel so strongly about free will. Given that a universe with free will is indistiguishable from one without free will, it seems to me not to matter what we call our particular one.
But I did think (despite disliking the whole seventies-retro-feel that the game of life thing brings to the book) that Dennett had a point in the first 50 pages of his book. Humans are open systems, and our dynamics are not determined by knowledge of any small subset of the universe.
posted by tom s. on January 19, 2008 #
I’m no fan of DD, but I would not base my criticism of him on a summary somebody else has written of one of his books. Have you read the actual book? (I haven’t, hence my lack of opinion on the subject).
posted by Björn on January 19, 2008 #
Why would Dennett bother to argue that determinism doesn’t imply that everything is un-dodge-able? That’s a claim any incompatibilist would give him. I can’t believe he would be stupid enough to then go on and claim that inevitable-as-undodgeable = inevitable-as-had-to-happen-given-the-laws-of-physics-plus-initial-conditions. But if he doesn’t, then there’s really no point to giving this argument in the first place. Maybe G&S misunderstood whatever he actually said?
posted by Liron Greenstein on January 19, 2008 #
the interesting stakes in this case are that gazzaniga uses dennett’s argument to show that justice and law are relevant in a world of deterministic neurobiology, because our will is the underpinning of all european law.
to the extent that we choose evil without coercion it’s our society’s right and obligation to punish us. we presume our moral decisions occur outside of the context of environmental forces - anything that forces the hand of an individual, even nature, removes personal responsibility and renders the legal system irrelevant. this is why we have coercion defenses and insanity pleas.
the avoider language is important - it’s specifically about avoiding committing crime. it’s really troubling to see a legal scholar grasping at straws like this. i have a roomba that wanders in a random walk around my house, but avoids falling down stairs due to a sensor. dennett and gazzaniga seem to be saying, by extension, that because of the existence and observed behavior of my robot, robots that do fall down stairs have chosen to fall. therefore if falling were a crime, they could be prosecuted for it. of course, there’s no will here, there’s just a sensor, which is present or not, and working or not. these men have nothing based in the science and the medicine, merely defective logic based on knowing little about cellular automata, and a kind of willful semantic ignorance of the language they are using.
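The Roomba point above can be made concrete with a toy sketch (hypothetical code, not anything like actual Roomba firmware): the “avoidance” is nothing but a conditional, and the entire run is fixed by the rules plus the initial state.

```python
# A deterministic "avoider": a robot pacing a hallway with a stair-drop at
# position 0. Its "sensor" is just an if-statement; there is no will anywhere.

def step(position, has_sensor):
    """Advance the robot one tick. Deterministic: same inputs, same output."""
    if has_sensor and position <= 1:
        return position + 1   # sensor fires near the drop: back away
    return position - 1       # otherwise drift toward the stairs

def run(start, has_sensor, ticks=10):
    """Run the robot and record every position it occupies."""
    pos = start
    history = [pos]
    for _ in range(ticks):
        pos = step(pos, has_sensor)
        history.append(pos)
    return history

with_sensor = run(start=3, has_sensor=True)
without_sensor = run(start=3, has_sensor=False)

print(min(with_sensor))     # -> 1   (never reaches the drop at 0)
print(min(without_sensor))  # -> -7  (goes over the edge)
```

Running `run(3, True)` any number of times yields the identical history: the stairs are reliably “avoided” in a world with no indeterminism at all, which is exactly why the sensor’s presence or absence, rather than any choice, does all the work.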
i don’t think the argument for the legal system is as weak as gazzaniga makes it, but it points to medical research being highly disruptive to society’s ideas of justice in the next few decades. i hope better arguments than dennett’s are put to the task.
posted by q on January 19, 2008 #
I don’t find this argument to be much dumber than Searle’s argument that computers can’t be conscious because they aren’t “brains” (a brain, of course, being that which is conscious) - whatever the reason why you fail to see the stupidity of Searle’s argument, people probably fail to see the stupidity of Dennett’s argument for the same (or a similar) reason.
posted by Mark on January 19, 2008 #
It seems evident that people, including Aaron, care about this because it’s another way of posing the question of (my paraphrasing) “Do we have mystical souls of a divine nature (free will), or are we mere machines no different from computers (determinism)?”
We are like computers == determinism
We transcend physicality == free will
posted by Seth Finkelstein on January 20, 2008 #
a) Yes, the pun is terrible.
b) Andrey’s comment about wasting time “arguing about definitions with philosophers” is somewhat disconcerting. That sort of thinking goes a long way toward convincing me to never buy another philosophy book outside the classics.
c) You all need to go contemplate the consequences of quantum mechanics and thermal physics (especially Brownian motion) on synapses and other cellular ligand-receptor systems. You should also contemplate how neurons, or small systems of neurons, will likely prove to be Turing complete. One example of how cells could be Turing complete: (1) control: the universal DNA-RNA-protein dogma; (2) program: the unique cellular machinery of each neuron; (3) data: synaptic and interstitial inputs that stimulate the machinery to do things that lead to other outputs. Also contemplate the lossy nature of this biological computer: entropy, leaky integration, and thermal losses all conspire to insert a fair amount of random data into these Turing machines.
Along those lines, explain why psychoactive drugs work. I guarantee a person on clozaril and haldol will have a different set of thoughts one hour after the injections than they would if they hadn’t taken the injections.
I’m pretty sure that once the dust settles, maybe in this century, we’ll find that while the neurons, or small sets of neurons, may be Turing complete, there’s simply too much quantum randomness in the brain (synaptic and interstitial) to advance any simple deterministic model, yet at the same time, it will be difficult to argue that the mind transcends the physical processes that occur within. If anything, we will probably extend the entity which is the person to include the whole person. You lose your pinky toe, you’ll have different thoughts. You may still be able to work on high-level math or debate the merits of the death penalty, but if you lose part of your pancreas, or a kidney, the chemistry of the interstitial fluid in your brain will be oh so slightly different, or will degrade more quickly over time. Something will change in your thinking.
In a very real way, you can manipulate your own future thoughts by manipulating your current environment. If you start buying fewer processed snack foods then you’ll be less likely to think about weight-loss programs in 20 years. Of course, you’ve been manipulating your own thoughts for years. Or have you? Where did the first manipulations come from? Or were those simply the condensations of many random events that piled up over time, influenced, remember, by environmental stimuli too - environmental stimuli that have been affecting your outcomes all the way since your mother’s eggs stopped reproducing while she was in utero, leading to decisions her body made about which oocyte follicles would mature, and your father’s net pH the night you were conceived, leading to a subtle shift in which spermatocytes could gain entry to the maternal oocyte, on which night, and whether or not she drank that night, which might affect, slightly, when your neural tube closed, and some weeks later, how much glucose she had on board when it was time for your heart to start beating … did it start a few minutes later, slowing the rate of division of neurons in your brain for a few moments?
While the whole thing may be deterministic, after allowing for quantum mechanics and initial conditions, the machines are so numerous and the system so complex that the only functional way to get through life is to act on what we perceive to be free will. Greene’s fMRI results on the trolley problem hint at this.
posted by Niels Olson on January 20, 2008 #
Mark: Maybe because that’s not Searle’s argument.
posted by Aaron Swartz on January 20, 2008 #
Aaron,
It’s much more honest to do another post professing your love of Searle than to call Dennett dumb. As has been pointed out, you have probably done very little to read any Dennett without an agenda going in. Dennett has written books whose content is not personal attacks; many people are familiar, e.g., with his recent work Breaking the Spell.
There is a temptation to suggest, and I will succumb to it, that you are embarrassed by how much you are an advocate of Searle. Your post does not mention the name, but the “dsquared” quote that you dug up from 2002 did not finish this sentence:
…Dennett had been pretty (no, make that very) badly behaved when it came to falsely attributing strawman positions to John Searle, and not admitting it when corrected.
Now someone has brought up the question of why you are so concerned about the issues of determinism and free will. I am curious about this also. In your article about the game you leapt rather quickly to the notion that we are so much more than machines. What are you so worried about? Determinism versus free will is an old concern and lots of people have moved on to more reasonable debates. From my point of view, we know it’s far too hard to predict even relatively simple phenomena (on our length scale, ignoring the fact that quantum mechanics is probabilistic) compared with humans. Why bash Dennett when all you want to say is that you are a real boy?
posted by Jeremy Corbett on January 20, 2008 #
I disliked Dennett long before I’d read anything by Searle, so I do not think your explanation is true. It’s true that Searle is something of the anti-Dennett, but I think that’s more of an explanation of why I like Searle than why I dislike Dennett.
I’m not worried about free will (in this piece); this just happens to be the example someone showed to me.
posted by Aaron Swartz on January 22, 2008 #
BTW, I think it’s intriguing that nobody (save perhaps Jamie) has tried to defend the argument. Everyone’s either attacked me personally for one reason or another, attacked the choice of subject (certainly not mine!), etc.
posted by Aaron Swartz on January 22, 2008 #
I’ll repeat Bjorn’s question: Aaron, have you read Freedom Evolves?
posted by Jamie McCarthy on January 22, 2008 #
Aaron, I passed on the argument itself explicitly because I don’t think it’s really at issue. I was clear on that. But it’s not really surprising that if you make a blog post along the lines of so-and-so is dumb because of a complicated philosophical argument, the most attention is drawn by the so-and-so-is-dumb part rather than the complicated philosophical argument.
posted by Seth Finkelstein on January 23, 2008 #
Aaron,
I’m not surprised that you disagree with my characterization of Searle’s argument, but I have not been able to understand it in any other way, even after asking him personally about it when I saw him speak several months ago.
I asked him why a computer simulation of a brain couldn’t “be” (or contain) consciousness, and his response, verbatim, was “simulations leave something out”. If his position is that any conceivable simulation of a brain must leave something out, it is hard to imagine what the something must be if not gooey gray and white matter. IOW, computers can’t be conscious because they aren’t made of the same stuff as brains.
If I’m misunderstanding Searle, do enlighten me.
posted by Mark on January 23, 2008 #
Searle’s argument is this:
Do you have a way for the laws of physics to detect sufficiently complicated simulations?
posted by Aaron Swartz on January 23, 2008 #
P.S. FWIW, much experience has taught me that accosting people after talks with counterarguments is about the worst way of getting an intelligent response.
posted by Aaron Swartz on January 23, 2008 #
Aaron,
That argument displays the exact circularity I’m talking about. How do the “physics of consciousness” know that they have to “attach” to a brain that is conscious, but not a brain that is not conscious? (Such as one of a person in deep sleep, a coma, recently deceased, or, say, the brain of a mosquito). Do you have a way for the laws of physics to detect sufficiently complicated neural activity?
(This is notwithstanding that there is no reason for anyone to accept the existence of “qualia” in the first place, though I assume that is a premise of Searle’s argument.)
By the way, I didn’t accost Searle, I asked a question during the Q&A session, and it was very much in the context of what he’d been talking about.
posted by Mark on January 24, 2008 #
How do the laws of physics attach consciousness to awake people but not to people who are knocked out? Clearly the brain must be doing something specific to cause consciousness when we’re awake, and do it differently, or fail to do it, when we’re knocked out. This is biological naturalism. Clearly this can’t be the case for simulations, since a computer can be simulated by an appropriate series of ping pong balls and flippers, and it seems pretty obvious that those don’t do anything special to cause consciousness.
You don’t have qualitative experiences? I don’t believe you.
posted by Aaron Swartz on January 24, 2008 #
Why is it “obvious” that neurons do something to cause consciousness, but obvious that ping pong balls and flippers don’t? This is why I summarized Searle’s argument as “computers can’t be conscious because they aren’t brains”. (Pretty much any time you use “clearly” in a philosophical argument, you are using it to mask some unfounded assumption.)
I have qualitative experiences. I have no evidence that they are the result of a physical substance that has “attached” to my brain. That part is pure speculation, like the ether of Victorian science.
Anyway, the point here is not for us to rehash tired arguments about consciousness. My point was to illustrate that you are making some assumption that makes Searle’s arguments sensical, and Dennett’s nonsensical. I don’t particularly like either of their viewpoints, but I don’t attribute that to their stupidity, I attribute it to their different starting assumptions.
posted by Mark on January 24, 2008 #
Because people are conscious and ping pong balls are not. Do you disagree?
What’s your alternative?
Yes, you’re right, I’m assuming people are conscious, ping pong balls aren’t, and that my subjective experiences are connected to my brain. I don’t think most Dennett fans would disagree with any of those, though.
posted by Aaron Swartz on January 24, 2008 #
Aaron, I’ve mentioned before, I think you’re committing a fallacy of pathological reductionism.
It’s like saying “A picture can’t be made up of tiny little dots, because looking at each little dot, nobody can tell if it’s part of a picture or just a random blob”.
“A human body can’t be a bunch of atoms, because looking at an atom, nobody can tell if it’s part of a human body or a rock”
Hence - “A brain can’t be a big computer, because looking at each circuit, nobody can tell if it’s part of a human mind or a video game”.
I can’t see how you can reasonably ignore all the evidence from neurology in favor of a slipperily phrased bit of pontification that should be approached with suspicion on vagueness alone.
posted by Seth Finkelstein on January 24, 2008 #
Aaron,
You’re comparing apples to atoms. Yes, it’s obvious people are conscious and ping pong balls aren’t. But nobody is suggesting that ping pong balls (or transistors), individually, could become conscious. The question is about large collections of them.
A more valid comparison for you to have made would have been “Because neurons are conscious and ping pong balls aren’t. Do you disagree?”, to which I would have answered yes, I disagree, because I don’t think an isolated neuron is (or can be) conscious, any more than I think an isolated transistor can be conscious.
And if you think an isolated neuron is conscious, I don’t see how you can defend that position by any means other than repeated assertion, since no isolated neuron has ever been observed to display any sign of consciousness.
posted by Mark on January 24, 2008 #
I don’t think an isolated neuron is conscious.
posted by Aaron Swartz on January 24, 2008 #
I think that the term “physics of consciousness” is rather dubious. Are there any solid results from the “physics of consciousness”, or are they some philosophical conjectures dressed up as physics?
posted by Jeremy Corbett on January 26, 2008 #
weighing into the consciousness debate… Fodor had a nice response to Searle. Imagine putting a little metal wrapper on each axon, and another one on each dendrite. This wrapper mediates all neuron-to-neuron connections. A signal travels down a neuron, hits a wrapper, the wrapper passes neurotransmitter-simulator to the other wrapper, which interprets it and sends the corresponding signal on the next neuron. One would think that, even if all of our neurons were wrapped this way, we’d still be conscious. It’s hard to imagine why not; the same stuff is going on in our neurons; only the space between them has some mediating interaction.
But once we start mediating our neuronal interactions with a neurotransmitter-simulator, we can imagine replacing just about everything in our brain with simulators made of metal (or ping-pong-ball plastic or whatever you like), and as long as all these things are performing functions that are isomorphic to brain functions in the right way, we’d be conscious. So machines with such brains would also be conscious.
And anyway, biological naturalism doesn’t necessarily follow from the Chinese Room computers-can’t-be-conscious argument. If Searle’s right that the Chinese Room is in fact isomorphic enough to what the computer (or any Turing machine) is doing to rule out conscious computers, then maybe the brain isn’t a Turing machine. Then we need to find out what it is and make a machine like it.
posted by Liron Greenstein on February 1, 2008 #
Aaron
“1. In some deterministic worlds there are avoiders avoiding harms.”
Doesn’t the logic fail right here? The implicit assumption is that for avoiders to avoid harms, they must be “conscious” of the harm in order to avoid it.
Cheers, Arunn
posted by Arunn on February 11, 2008 #
I don’t know, man. When I was immersed in the book the argument was subtle and thought-provoking. I guess more things are involved here for you to dismiss it with such hubris. Be careful: you seem to be quite good at the “writing personal attacks on other philosophers” you say you hate about DD.
In any case, cheer up. Seriously. So young and already full of such vitriol?
posted by elzr on April 5, 2008 #
Hi, I kinda think you’ve missed the point of Dennett’s talk of evitability. He doesn’t claim to be providing what you call “medieval free will” with this example, but instead to be showing that the lack of “medieval free will” does not mean that we lack all sorts of free will. The point of Freedom Evolves is to show that evitability, or free will, can exist within a deterministic world, and to argue that it is these forms of free will that are the morally important ones.
posted by Harry Farmer on April 14, 2008 #
Liron writes:
Searle responds to this in a later book. He does not think that we’d still be conscious if all our neurons were wrapped this way. Assuming we’d still be conscious under these circumstances is tantamount to assuming biological naturalism is false, since biological naturalism suggests there’s a biological process other than pure neuronal interaction that leads to consciousness.
Searle explicitly says we probably can make conscious machines; he just says Turing machines aren’t conscious. He agrees with you entirely on this point.
posted by Aaron Swartz on May 17, 2008 #
As I said, by the physics of consciousness I mean qualia — the physical fact that I have conscious subjective experiences. This is so different from every other physical process we’ve studied that I don’t see how it could exist without its own physical basis.
posted by Aaron Swartz on May 17, 2008 #
Let me give you a piece of advice, Aaron: When you come across an argument that doesn’t make sense to you, assume that the failure is yours, not the author’s. This applies especially when the author is someone like Daniel Dennett, and it applies especially when other smart people don’t make a fuss about it (or at least not the simple-minded fuss that you make). You’re like a non-physicist who comes across Einstein’s theory of relativity and exclaims: “How could mass possibly be related to energy like that? What an idiot — how could anyone take Einstein seriously?!?” Meanwhile, smart people, educated people, read Einstein and conclude: “Brilliant!”
posted by Phillip Torres on July 28, 2008 #
Accusing someone of making personal attacks and being dumb, and then attacking him personally followed by a few lines of dumbness is really dumb, so I thought I might as well attack you personally on it.
posted by on October 1, 2009 #
“Let me give you a piece of advice, Aaron: When you come across an argument that doesn’t make sense to you, assume that the failure is yours, not the author’s. This applies especially when the author is someone like Daniel Dennett, and it applies especially when other smart people don’t make a fuss about it (or at least not the simple-minded fuss that you make). You’re like a non-physicist who comes across Einstein’s theory of relativity and exclaims: “How could mass possibly be related to energy like that? What an idiot — how could anyone take Einstein seriously?!?” Meanwhile, smart people, educated people, read Einstein and conclude: “Brilliant!” “
Oh yeah. Now we have Ayatollah Daniel Dennett. Woe to you if you use your own brains and point out a few of his blunders, because his followers are Smart and Educated People. What a brilliant argument.
posted by qualia on September 22, 2010 #