Showing posts with label philosophy. Show all posts
Friday, April 05, 2013
Doug Groothuis Demonstrates His Intellect Again
Everybody laughed when creationist Ray Comfort thought "bibliophile" was an insult. But it was hardly the stupidest thing said by a creationist this week. I nominate this gem from "Douglas Groothuis, Ph. D.".
I'm not sure which is funnier: that he thinks that a scientific theory could possibly be disproved by a "moral argument", or that he thinks that biologists believe that "various races of humans may be more evolved than other races".
Labels:
creationism,
Doug Groothuis,
philosophy,
stupidity
Friday, April 27, 2012
Friday Moose Blogging
Scientist at Work talks about moose populations in Isle Royale:
"Do they wonder why they suffer? Do they linger a few moments longer before getting up again and then sigh before plowing through the snow for another bout of foraging? Moose certainly have thoughts, and some we understand — the fear of being chased by a wolf, the pleasure of eating fresh blue-bead lilies in the spring. But our knowledge about the content of most moose thoughts — thoughts that are as real as any of my mine — lie at the fuzzy boundary between inference and imagination."
"Do they wonder why they suffer? Do they linger a few moments longer before getting up again and then sigh before plowing through the snow for another bout of foraging? Moose certainly have thoughts, and some we understand — the fear of being chased by a wolf, the pleasure of eating fresh blue-bead lilies in the spring. But our knowledge about the content of most moose thoughts — thoughts that are as real as any of my mine — lie at the fuzzy boundary between inference and imagination."
Labels:
intelligence,
moose,
philosophy,
thinking
Thursday, September 08, 2011
Robots as Companions
Here's an interview with Sherry Turkle, originally released back in April, but replayed yesterday.
For me, here was the most interesting exchange:
Nora Young: "So if we imagine a future where we have robotic companions, the way we now have Roomba vacuum cleaners and Furbies, what's the problem with transferring our idea of companionship to things that aren't actually alive, what's at risk of us losing?"
Sherry Turkle: "Well, these are companions that don't understand the meaning of our experiences, so it forces us to confront what is the meaning of a companion. It's like saying, 'I'm having a conversation with a robot.' Well, you have to say to yourself, 'You've forgotten the meaning of a human conversation, if you think a conversation is something you can have with a robot.'"
Now, I understand that an interview like this is necessarily shallow, and I haven't read Turkle's latest book on the subject. But still, this interview seems to suggest a real misunderstanding on Turkle's part.
Yes, when we interact with technology that mimics living creatures, we run the risk of having an overly-optimistic mental model about how much the technology "understands" us. That's the lesson of ELIZA. But in terms of "companionship", many of our companions fail to understand us, in exactly the same way.
When you tell your troubles to your dog, how much do you think your dog understands? A little bit, obviously -- a dog can pick up on your mood and react appropriately. But it seems unlikely a dog will "understand" the details that your best friend just died of AIDS, or that your latest book got a bad review, or that your spouse just walked out on you. Nevertheless, a dog can be a great companion. Why is a living dog a legitimate companion, and a robot dog not?
Even when we interact with other people, they will often listen and express sympathy (and we will happily receive their sympathy and feel comforted by it) without really understanding. As children, we had our crises that were beyond our parents' understanding. And now, as a parent, my children have emotional lives that are largely hidden from me. Yet we can comfort each other, and be good companions, without the deep understanding that Turkle seems to think is required.
Turkle seems to have a mental model of "understanding" that is too black-and-white. Just as, in the famous words of McCarthy and Dennett, a thermostat can be said to have "beliefs", so too can animals and robots have "understanding" of our experiences and needs. Here, by "understanding", I mean that animals, young children, and robots have limited models of us that suffice to provide the appropriate responses to comfort us. A dog can come and lick your face or curl up with you. A child can come sit in your lap. A robot can commiserate by asking what's wrong, or saying it's sorry to hear about our troubles, or even make the right facial expression.
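As a purely illustrative aside (a toy sketch of my own, not anything Turkle discusses), even a few canned, ELIZA-style rules can produce responses that feel sympathetic without any deep model of the speaker. The patterns and replies below are invented for the example.

```python
import re

# A minimal ELIZA-style "companion": a few invented pattern -> response rules.
# It has no model of the world, yet its replies can come across as sympathetic.
RULES = [
    (r"\bi feel (.+)", "I'm sorry you feel {0}. Do you want to tell me more?"),
    (r"\bmy (\w+) (died|left|is sick)", "That's awful. Losing your {0} must be very hard."),
    (r"\bnobody (.+)", "It sounds lonely to think that nobody {0}."),
]

def respond(utterance: str) -> str:
    text = utterance.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "I see. How does that make you feel?"  # the classic fallback

print(respond("My dog died last week"))
print(respond("I feel like no one understands me"))
```

This is vastly cruder than any modern companion robot, of course; the point is only that a limited model can still produce the comforting responses described above.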
I think it's foolish to obsess about what such a robot "understands". For, after all, we can do the same thing with dogs and young children. How much do they "really" understand of our troubles? Less than an adult human, probably, but the experience is not necessarily worthless despite this lack of understanding.
When Chuck paints a face on a volleyball and makes it his companion in the movie Cast Away, nobody stands up and says, "You idiot! That's just a ball with a dumb face on it." We don't say that, because we understand what loneliness is like and the value of companionship. When Wilson falls overboard later in the movie, we understand why Chuck is so devastated.
I know the value of human conversation, but I still think you can have a conversation with a robot. As I said, I admit there's a danger in overestimating how much a robot understands about us. But children who have grown up with technology have a better understanding of the limitations than those adults who were fooled by ELIZA decades ago. They're not going to be fooled in the same way. Already, as Turkle points out, they've constructed a new category for things like Furby, which is "alive enough". And furthermore, the technology will improve, so that future robots will have better and better models of what humans are like. As they do so, they will become better companions, and questions about whether they "really" understand will simply seem ... quaint.
Saturday, April 30, 2011
Craig: If God Kills Kids, It's OK
Fundamentalists say the darndest things!
But they're not always pretty. Here we have the truly appalling spectacle of William Lane Craig justifying genocide. It's OK, he says, if God does it.
And - believe it or not - Craig is actually a respected Christian philosopher. Doesn't it make you wonder what one would have to do to lose respect?
Craig is fond of syllogisms, so here's one just for him:
1. All sane beings agree that genocide is wrong.
2. The Christian god thinks genocide is just peachy.
3. Therefore...
Thursday, April 28, 2011
Review of Monton's "Seeking God in Science"
Bradley Monton is a philosophy professor at the University of Colorado at Boulder and a self-proclaimed atheist. He's written a little book (147 pages for the main text, not counting the preface, endnotes, index, etc.) entitled Seeking God in Science: An Atheist Defends Intelligent Design, which I finally had a chance to read. It consists of four chapters:
- What is intelligent design, and why might an atheist believe in it?
- Why it is legitimate to treat intelligent design as science
- Some somewhat plausible intelligent design arguments
- Should intelligent design be taught in school?
I'm afraid this book is not very good. Monton comes off as rather naive (displaying little understanding of the abundant and documented dishonesty in the ID movement) and ignorant of science, the history of intelligent design creationism, and its role in the creationism-evolution wars.
The first chapter of the book is devoted to one of my least favorite philosophical games: trying to create a definition for a concept that covers all possible cases, by starting with a definition and iteratively refining it. He spends 25 pages (pages 16-40) playing this game with the concept of "intelligent design" itself, in a tedious and unenlightening way (for example, he even addresses the possibility that God is biologically related to humans!) and here is what he comes up with (italics in original):
"The theory of intelligent design holds that certain global features of the universe provide evidence for the existence of an intelligent cause, or that certain biologically innate features of living things provide evidence for the doctrine that the features are the result of the intentional actions of an intelligent cause which is not biologically related to the living things, and provide evidence against the doctrine that the features are the result of an undirected process such as natural selection."
Now I don't particularly like this game (although it has a long history -- philosophers have enjoyed applying it to "chair", for example), because for almost any definition proposed it is easy to come up with some outlandish counterexample. Still, as a mathematician, I enjoy and admire precision, so perhaps it's not a game completely without value. But after reading his definition I could only mutter, All that work! - and he still has an imprecise and unusable mess.
Unusable, since key terms like "intelligent cause" and "undirected process" are not defined or made rigorous. Could it be, as other commentators have already observed, that when we try to define "intelligent cause" we discover that natural selection itself could be considered intelligent by our criteria? Could it be that intelligence is a continuous measure, not a discrete quality, so that speaking of an "intelligent cause" is essentially meaningless unless the amount of intelligence is quantified?
Mess, because by calling intelligent design a "theory", Monton begs the question.
Imprecise, because this definition doesn't cover much of what the intelligent design advocates themselves discuss. For example, in Dembski's book No Free Lunch, he spends a good 10 pages discussing the case of Nicholas Caputo, an election official accused of rigging elections. Dembski implies that his intelligent design methodology can help resolve the case of whether Caputo cheated. But this case has nothing to do with a "global feature" of the universe or a "biologically innate" feature of living things.
Monton seems rather naive about the intelligent design movement. For example, on page 12, he claims, "As a matter of public policy, the Discovery Institute opposes any effort to require the teaching of intelligent design by school districts or state boards of education." But this claim could only be made by someone who doesn't understand (a) that the Discovery Institute has a long history of dissembling and (b) that intelligent design, as practiced by its leading proponents (Behe, Dembski, Meyer) is largely a negative program of casting doubt on the theory of evolution, or examining its supposed deficiencies. Therefore, Discovery Institute programs like "Teach the Controversy" and "Critical Analysis of Evolution" are, in fact, just covers for getting intelligent design into the classroom. This is abundantly clear to most people who have studied the intelligent design movement in any depth.
Part of the book is devoted to analyzing the views of pro-science philosophers, such as Taner Edis, Massimo Pigliucci, and Robert Pennock. Needless to say, Monton thinks they have it wrong in many ways; they are "sloppy" and "confused". But much of his criticism seems misplaced. For example, he gives the following advice to Barbara Forrest: she "focuses too much on attacking the proponents of intelligent design for the supposed cultural beliefs they have, instead of attacking the arguments for intelligent design that the proponents of intelligent design give". But Forrest has never said that intelligent design advocates are wrong because of their cultural beliefs; rather, she has fearlessly and tirelessly explored the goals and strategies of intelligent designers, as well as the sociological and political connections between intelligent design creationism and the religious right. Monton is apparently unconcerned with these details, and that's his right. But then his criticism amounts to "I don't share your interests", and that's rather pathetic.
Monton is a fan of Laudan, citing the following passage approvingly: "If we would stand up and be counted on the side of reason, we ought to drop terms like "pseudo-science" and "unscientific" from our vocabulary; they are just hollow phrases which do only emotive work for us." I strongly disagree. As I mentioned already, almost any definition or classification is subject to exceptions, but it's still useful to be able to say something is a chair or not a chair, even if we cannot always agree about the boundaries. Science, as a social process, has a number of characteristics, and it is perfectly legitimate and useful to point out that creationism and its modern variant, intelligent design, fail to share many of these characteristics.
There are signs that although he thinks intelligent design merits a book-length treatment, Monton hasn't really grappled with the issues. For example, on page 17, he cites a beehive as an example of a feature of the universe that "indisputably exist[s] as a result of an intelligent cause" and then, in a footnote, says that "It was surprising to me that some readers objected to this line of thought, saying that ... bees ... aren't intelligent." Well, I'd guess that this surprise comes largely from the fact that Monton hasn't really thought deeply about what intelligence is. We now know a lot about algorithms and about naturally-occurring processes that perform computation, but Monton doesn't seem to know anything about this work. He also has some real misconceptions about mathematics and computing, claiming that computers can't represent irrational numbers. (I've addressed this misconception here.)
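To see why that claim is wrong, here is a small sketch (my own example, not one from the book or from the post linked above) of a computer representing an irrational number exactly: values of the form a + b√2 with rational a and b can be stored and multiplied symbolically, with no approximation anywhere.

```python
from fractions import Fraction

class QSqrt2:
    """Exact arithmetic with numbers of the form a + b*sqrt(2), a and b rational."""

    def __init__(self, a, b=0):
        self.a = Fraction(a)  # rational part
        self.b = Fraction(b)  # coefficient of sqrt(2)

    def __add__(self, other):
        return QSqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt(2)"

sqrt2 = QSqrt2(0, 1)                 # an exact representation of sqrt(2)
print(sqrt2 * sqrt2)                 # 2 + 0*sqrt(2): exactly 2, no rounding error
print(QSqrt2(1, 1) * QSqrt2(1, -1))  # (1 + sqrt2)(1 - sqrt2) = -1 + 0*sqrt(2)
```

Computer algebra systems do this routinely, and for far larger classes of numbers than Q(√2).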
Several times in the book, Monton refers to the Newtonian account of physics and argues it has been "refuted". On page 50, he uses it to argue that false scientific theories can still count as science (so if intelligent design has been refuted, it could still be considered science). On page 152, he uses it to argue that false scientific theories are routinely taught in high school science (so intelligent design, even if false, could still be taught). But this black-and-white classification of theories as either "false" or "not false" doesn't even come close to capturing the status of Newtonian physics. Yes, it doesn't give the right answers for particles moving at high velocity, for example. But I can't think of a single scientific theory that unfailingly predicts the outcome of every single experiment. It is more correct, I think, to view theories and equations as our models of reality and to have a good idea of their shortcomings and applicability. No one uses special relativity to solve simple problems in kinematics; they use Newton and they don't apologize for it. If we classify theories purely as "true" or "false", then we lose the nuance that some "false" theories are pretty damn good and others are worthless.
Chapter 3 summarizes some of the arguments of intelligent design advocates, such as alleged "fine-tuning", the origin of the universe, the origin of life, irreducible complexity, and the simulation argument. There is not really much analysis that is new here, but I found his discussion of the simulation argument the most interesting part of the book.
Chapter 4 addresses the question of whether intelligent design should be taught in school. By "taught in school", Monton means "taught in public high-school science classes" (although he takes two whole pages to explain this - an example of how clunky the writing is). One of the objections Monton addresses is "we wouldn't be teaching a real controversy", and he answers this by citing Michael Behe as an example of a real scientist who disagrees with the scientific consensus. Ergo, there is a real controversy. But if Monton's definition of "real controversy" is "one scientist disagrees" or even "a handful of scientists disagree", then there is a "real controversy" about relativity, heliocentrism, and the germ theory of disease. Indeed, it would be hard to come up with a scientific theory for which there is no controversy in Monton's sense. Monton's position is absurd. There are controversies, and then there are controversies; it's not a black-and-white term. The "controversy" over evolution is exactly like that over relativity: a very small number of experts in the field, and a larger number of cranks, disagree with current consensus. That doesn't mean their objections merit coverage in science class. I'm not opposed to teaching controversies, but let's teach some real ones.
Finally, I'd say that the book, and Monton himself, seems curiously disengaged from the extensive mainstream criticism of intelligent design. To give one illustration, he doesn't cite much of the literature arguing against the claims of intelligent design advocates. Nowhere will you find any mention of, for example, the fine article of Pallen and Matzke (published in 2006 in Nature Reviews Microbiology) -- although other articles of Matzke are cited -- or the article of Wilkins and Elsberry (published in 2001 in Biology and Philosophy). He lists two conferences where he's presented his work, and both of them were hosted by the "Society of Christian Philosophers". Four people are listed as endorsers on the back of his book, and three of them are non-biologist critics of evolution (Berlinski, Dembski, Groothuis). And Monton has a blog, but he doesn't allow any comments on it. I can't help but think Monton's book would have been much better if he had made more attempts to be engaged with those who disagree with him.
Labels:
Bradley Monton,
intelligent design,
philosophy
Tuesday, February 15, 2011
'Watson' on Jeopardy
Well, the first episode of 'Watson' on Jeopardy was shown last night. I didn't see it live, but luckily it's available on YouTube, at least for a little while.
It's a great achievement. Question-answering systems are a hot topic now - my colleague Ming Li, for example, has created such a system, based on word associations it finds on the Internet. But Watson is much better than anything I've seen before. A system like Watson will be extremely useful for researchers and libraries. Instead of having to staff general inquiry telephone lines with a person, libraries can use a system like Watson to answer questions of patrons. And, of course, there will be applications like medical diagnoses and computer tech support, too.
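Just to illustrate the general idea of association-based question answering (a toy of my own invention, not a description of Ming Li's system or of Watson), one can score candidate answers by how often they co-occur with the question's keywords in a corpus; the sentences below are made up.

```python
from collections import Counter
from itertools import combinations

# A toy corpus standing in for text gathered from the web (invented sentences).
CORPUS = [
    "ottawa is the capital of canada",
    "the capital of canada is ottawa",
    "toronto is the largest city in canada",
    "moose are common in canada",
]

# Count how often pairs of words co-occur in the same sentence.
cooccur = Counter()
for sentence in CORPUS:
    words = sorted(set(sentence.split()))
    for u, v in combinations(words, 2):
        cooccur[(u, v)] += 1

def association(w1, w2):
    return cooccur[tuple(sorted((w1, w2)))]

def answer(keywords, candidates):
    # Pick the candidate most strongly associated with the question's keywords.
    return max(candidates, key=lambda c: sum(association(c, k) for k in keywords))

print(answer(["capital", "canada"], ["ottawa", "toronto", "moose"]))  # ottawa
```

Watson's pipeline is enormously more sophisticated, but the underlying bet is similar: shallow statistical evidence, aggregated over enough sources, answers a surprising fraction of questions.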
I predict, however, that the reaction to Watson will be largely hostile, especially from Mysterian philosophers (like Chalmers), strong AI skeptics (like the Dreyfus brothers), and hardcore conservative theists firmly committed to the special status of humans (like David Gelernter). We'll also hear naysaying from jealous engineers (like this letter from Llewellyn C. Wall, who earns my nomination for Jerk of the Week).
Despite its impressive performance, we're going to hear lots of claims that Watson "doesn't really think". Critics will point gleefully to Watson stumbling on an answer, replying "finis" when the correct response was "terminus" or "terminal" -- as if humans never make a mistake on Jeopardy. We're going to hear columnists stating "But Watson can't smell a rose or compose a poem" - as if that is a cogent criticism of a system designed to answer questions.
I predict none of these naysayers will deal with the real issue: in what essential way does Watson really differ from the way people think? People make associations, too, and answer questions based on sometimes tenuous connections. Vague assertions like "Watson doesn't really think" or "Watson has no mental model of the world" or "Watson is just playing word games" aren't going to cut it, unless critics can come up with a really rigorous formulation of their argument.
Watson is just another nail in the coffin of Strong AI deniers like Dreyfus - even if they don't realize it yet.
Addendum: Ah, I see the moronic critiques are already dribbling in: from Gordon Haff we get typical boilerplate: "Watson is in no real sense thinking and the use of the term 'understanding' in the context of Watson should be taken as anthropomorphism rather than a literal description." But without a formal definition of what it means to "think" in a "real sense", Haff's claim is just so much chin music.
Monday, February 14, 2011
Harris v. Wolpe
This is an oldie, but a goodie: Sam Harris versus David Wolpe:
I think Harris definitely gets the best of Wolpe, although Wolpe's no slouch. There are so many good lines by Harris it's hard to list them all. For example, "We need to cease to reward people for pretending to know things they do not know. And the only area of discourse where we do this is on the subject of God."
What interests me more, though, is Wolpe's utter confusion when it comes to understanding neuroscience (at 44:50):
"The reason that our minds can do something more than just operate on instinct is because we operate all the time with things that are not physical, right: ideas, words... I can say something and change the physiology of your brain. Now how is that unless there's something more to your brain than physiology?"
This is remarkably dim. Ideas and words are not physical? An idea is a certain pattern of our neurophysiology. Spoken words are vibrations of the air. The patterns thus formed are interpreted by the nerves in the ear and are transmitted to the brain as electrical signals. Calling these things "not physical" betrays an ignorant, pre-scientific view of the world.
I wonder where Wolpe thinks ideas reside, if not in the brains of humans and other animals? In some magical ethereal realm?
I can say something and change the physiology of my computer. Heck, if my toaster is hooked up to some voice recognition, I can say something and change the physiology of a piece of bread. How does that imply that there's "something more" to a piece of bread?
Labels:
philosophy,
religion,
Sam Harris,
silliness
Saturday, May 29, 2010
No Ghost in the Machine
Back when I was a graduate student at Berkeley, I worked as a computer consultant for UC Berkeley's Computing Services department. One day a woman came in and wanted a tour of our APL graphics lab. So I showed her the machines we had, which included Tektronix 4013 and 4015 terminals, and one 4027, and drew a few things for her. But then the incomprehension set in:
"Who's doing the drawing on the screen?" she asked.
I explained that the program was doing the drawing.
"No, I mean what person is doing the drawing that we see?" she clarified.
I explained that the program was written by me and other people.
"No, I don't mean the program. I mean, who is doing the actual drawing, right now?
I explained that an electron gun inside the machine activated a zinc sulfide phosphor, and that it was directed by the program. I then showed her what a program looked like.
All to no avail. She could not comprehend that all this was taking place with no direct human control. Of course, humans wrote the program and built the machines, but that didn't console her. She was simply unable to wrap her mind around the fact that a machine could draw pictures. For her, pictures were the province of humans, and it was impossible that this province could ever be invaded by machines. I soon realized that nothing I could say could rescue this poor woman from the prison of her preconceptions. Finally, after suggesting some books about computers and science she should read, I told her I could not devote any more time to our discussion, and I sadly went back to my office. It was one of the first experiences I ever had of being unable to explain something so simple to someone.
That's the same kind of feeling I have when I read something like this post over at Telic Thoughts. Bradford, one of the more dense commentators there, quotes a famous passage from Leibniz:
Suppose that there be a machine, the structure of which produces thinking, feeling, and perceiving; imagine this machine enlarged but preserving the same proportions, so that you could enter it as if it were a mill. This being supposed you might visit its inside; but what would you observe there? Nothing but parts which push and move each other, and never anything which could explain perception.
But Leibniz's argument is not much of an argument. He seems to take it for granted that understanding how the parts of a machine work can't give us understanding of how the machine functions as a whole. Even in Leibniz's day this must have seemed silly.
Bradford follows it up with the following from someone named RLC:
The machine, of course, is analogous to the brain. If we were able to walk into the brain as if it were a factory, what would we find there other than electrochemical reactions taking place along the neurons? How do these chemical and electrical phenomena map, or translate, to sensations like red or sweet? Where, exactly, are these sensations? How do chemical reactions generate things like beliefs, doubts, regrets, certainty, or purposes? How do they create understanding of a problem or appreciation of something like beauty? How does a flow of ions or the coupling of molecules impose a meaning on a page of text? How can a chemical process or an electrical potential have content or be about something?
Like my acquaintance in the graphics lab 30 years ago, poor RLC is trapped by his/her own preconceptions, and I don't know what to say. How can anyone, writing a post on a blog which is entirely mediated by things like electrons in wires or magnetic disk storage, nevertheless ask "How can a chemical process or an electrical potential have content or be about something?" The irony is really mind-boggling. Does RLC ever use a phone or watch TV? For that matter, if he/she has trouble with the idea of "electrical potential" being "about something", how come he/she has no trouble with the idea of carbon atoms on a page being "about something"?
We are already beginning to understand how the brain works. We know, for example, how the eye focuses light on the retina, how the retina contains photoreceptors, how these photoreceptors react to different wavelengths of light, and how signals are sent through the optic nerve to the brain. We know that red light is handled differently from green light because different opsins absorb different wavelengths. And the more we understand, the more the brain looks like Leibniz's analogy. There is no ghost in the machine, there are simply systems relying on chemistry and physics. That's it.
To be confused like RLC means that one has to believe that all the chemical and physical apparatus of the brain, which clearly collects data from the outside world and processes it, is just a coincidence. Sure, the apparatus is there, but somehow it's not really necessary, because there is some "mind" or "spirit" not ultimately reducible to the apparatus.
Here's an analogy. Suppose someone gives us a sophisticated robot that can navigate terrain, avoid obstacles, and report information about what it has seen. We can then take this robot apart, piece by piece. We see and study the CCD camera, the chips that process the information, and the LCD screens. Eventually we have a complete picture of how the robot works. What did we fail to understand by our reductionism?
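For what it's worth, here is the analogy in code (a toy sketch of my own, not a real robot architecture): the robot's "reporting what it has seen" is nothing over and above its parts doing their jobs, each of which can be inspected separately.

```python
# Toy "robot": sense -> process -> report, with nothing left over to explain.

def camera(world):
    # Sense: return raw brightness readings from a (pretend) environment.
    return list(world)

def processor(readings):
    # Process: classify each reading as an obstacle or open ground.
    return ["obstacle" if r > 5 else "open" for r in readings]

def reporter(labels):
    # Report: summarize what was "seen".
    return f"I saw {labels.count('obstacle')} obstacles and {labels.count('open')} open cells."

def robot(world):
    return reporter(processor(camera(world)))

print(robot([1, 7, 3, 9, 0]))  # I saw 2 obstacles and 3 open cells.
```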
Our understanding of how the brain works, when it is completed, will come from a complete picture of how all its systems function and interact. There's no magic to it - our sensations, feelings, understanding, appreciation of beauty - they are all outcomes of these systems. And there will still be people like RLC who will sit there, uncomprehending, and complain that we haven't explained anything, saying,
"But how can chemistry and physics be about something?"
"Who's doing the drawing on the screen?" she asked.
I explained that the program was doing the drawing.
"No, I mean what person is doing the drawing that we see?" she clarified.
I explained that the program was written by me and other people.
"No, I don't mean the program. I mean, who is doing the actual drawing, right now?
I explained that an electron gun inside the machine activated a zinc sulfide phosphor, and that it was directed by the program. I then showed her what a program looked like.
All to no avail. She could not comprehend that all this was taking place with no direct human control. Of course, humans wrote the program and built the machines, but that didn't console her. She was simply unable to wrap her mind around the fact that a machine could draw pictures. For her, pictures were the province of humans, and it was impossible that this province could ever be invaded by machines. I soon realized that nothing I could say could rescue this poor woman from the prison of her preconceptions. Finally, after suggesting some books about computers and science she should read, I told her I could not devote any more time to our discussion, and I sadly went back to my office. It was one of the first experiences I ever had of being unable to explain something so simple to someone.
That's the same kind of feeling I have when I read something like this post over at Telic Thoughts. Bradford, one of the more dense commentators there, quotes a famous passage of Leibniz
Suppose that there be a machine, the structure of which produces thinking, feeling, and perceiving; imagine this machine enlarged but preserving the same proportions, so that you could enter it as if it were a mill. This being supposed you might visit its inside; but what would you observe there? Nothing but parts which push and move each other, and never anything which could explain perception.
But Leibniz's argument is not much of an argument. He seems to take it for granted that understanding how the parts of a machine work can't give us understanding of how the machine functions as a whole. Even in Leibniz's day this must have seemed silly.
Bradford follows it up with the following from someone named RLC:
The machine, of course, is analogous to the brain. If we were able to walk into the brain as if it were a factory, what would we find there other than electrochemical reactions taking place along the neurons? How do these chemical and electrical phenomena map, or translate, to sensations like red or sweet? Where, exactly, are these sensations? How do chemical reactions generate things like beliefs, doubts, regrets, certainty, or purposes? How do they create understanding of a problem or appreciation of something like beauty? How does a flow of ions or the coupling of molecules impose a meaning on a page of text? How can a chemical process or an electrical potential have content or be about something?
Like my acquaintance in the graphics lab 30 years ago, poor RLC is trapped by his/her own preconceptions, I don't know what to say. How can anyone, writing a post on a blog which is entirely mediated by things like electrons in wires or magnetic disk storage, nevertheless ask "How can a chemical process or an electrical potential have content or be about something?" The irony is really mind-boggling. Does RLC ever use a phone or watch TV? For that matter, if he/she has trouble with the idea of "electrical potential" being "about something", how come he/she has no trouble with the idea of carbon atoms on a page being "about something"?
We are already beginning to understand how the brain works. We know, for example, how the eye focuses light on the retina, how the retina contains photoreceptors, how these photoreceptors react to different wavelengths of light, and how signals are sent through the optic nerve to the brain. We know that red light is handled differently from green light because different opsins absorb different wavelengths. And the more we understand, the more the brain looks like Leibniz's analogy. There is no ghost in the machine, there are simply systems relying on chemistry and physics. That's it.
To be confused like RLC means that one has to believe that all the chemical and physical apparatus of the brain, which is clearly collects data from the outside world and processes it, is just a coincidence. Sure, the apparatus is there, but somehow it's not really necessary, because there is some "mind" or "spirit" not ultimately reducible to the apparatus.
Here's an analogy. Suppose someone gives us a sophisticated robot that can navigate terrain, avoid obstacles, and report information about what it has seen. We can then take this robot apart, piece by piece. We see and study the CCD camera, the chips that process the information, and the LCD screens. Eventually we have a complete picture of how the robot works. What did we fail to understand by our reductionism?
Our understanding of how the brain works, when it is completed, will come from a complete picture of how all its systems function and interact. There's no magic to it - our sensations, feelings, understanding, appreciation of beauty - they are all outcomes of these systems. And there will still be people like RLC who will sit there, uncomprehending, and complain that we haven't explained anything, saying,
"But how can chemistry and physics be about something?"
Saturday, March 13, 2010
Don't Cite Works You Haven't Read
It's something you teach your graduate students: Don't cite works you haven't read.
It's a rule with good reasons behind it. First, it's a bad idea to rely on someone else's summary of another work. Maybe they summarized it incorrectly, or maybe there is more there you need to consider. Second, as a scholar, it's your obligation not to spread misinformation. Maybe the page numbers or the volume are given incorrectly.
Like all rules, this one has occasional exceptions. Maybe it's a really old and obscure work that you've tried to get a copy of, but failed. In that case, you can cite the work but mention that you haven't actually been able to find a copy. (I've done this.) That way, at least the reader will be warned that you're relying on someone else's citation.
And now, from Paris, comes a spectacular case of why citing works you haven't read is a bad idea. The French philosopher Bernard-Henri Lévy has been caught citing and praising, in his new book De la guerre en philosophie, the work of the philosopher "Jean-Baptiste Botul". Only problem? Botul doesn't actually exist. He is the creation of journalist Frédéric Pagès.
Now, maybe Lévy did actually read Botul's book La vie sexuelle d'Emmanuel Kant. But if so, despite the big warning signs (Botul's school is called "Botulism") he failed to recognize it as a big joke, which raises even more questions about his perspicacity.
Maybe I need to tell my graduate students another rule: Don't cite works that you suspect may be a hoax.
Oh, and for the record? I haven't read Lévy's new book, nor Pagès's satire.
Friday, January 01, 2010
Free Will Being Challenged
I have thought for a long time that "free will" is an incoherent philosophical concept. I'm not sure one can define it in any reasonable way. It is not simply the capacity for choice, because a machine flipping a coin would achieve the same result. So what is it? For the present, I will assume it refers to our feeling of being "in control".
We all have the sensation of being "in control", but how do we decide whether a biological organism other than us possesses free will? Does a bonobo have it? A dolphin? A cockroach? A bacterium? Can philosophy alone offer any guidance? I don't think so. Samuel Johnson once remarked, "All theory is against the freedom of the will; all experience for it." But we know that our common-sense experience doesn't always match up to the physical world, as in our strange system for perceiving color and how it can be fooled. So simply feeling that we have free will doesn't mean we actually do. Maybe we don't.
I think it quite possible that we lack free will in any reasonable sense - that, in fact, our actions are essentially deterministic. Despite this, I also think that our feeling that we are "in control" has a plausible basis -- I guess this makes me a "compatibilist", like Daniel Dennett. But I have a slightly different take on why, which is probably not original, but which I've never seen discussed in philosophy texts, although someone has probably done so. Namely, I'd guess that our computational hardware and software is so complex that it is not easy to predict the outcome of any situation with high probability - and in particular, we cannot even know how we ourselves will react in any given situation. We probably can do a simulation in principle, but in practice such a simulation would take too much time. So although we don't have free will in actual fact, the unpredictability of our actions makes it appear we do to beings with limited computational resources, such as ourselves. I'm hopeful that the theory of computational complexity may eventually play a role in a generally-accepted solution to the conundrum that has baffled philosophers for centuries.
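To make the unpredictability point concrete (my own toy example, not anything from the cited studies), here is a process that is completely deterministic, yet for which nobody knows a faster way to learn its state after n steps than to run all n steps one by one.

```python
import hashlib

def next_state(state: bytes) -> bytes:
    # A completely deterministic update rule.
    return hashlib.sha256(state).digest()

def state_after(n: int, seed: bytes = b"initial conditions") -> bytes:
    # As far as anyone knows, there is no shortcut: predicting the state after
    # n steps requires simulating all n steps.
    state = seed
    for _ in range(n):
        state = next_state(state)
    return state

print(state_after(10).hex()[:16])    # same seed, same answer, every time
print(state_after(1000).hex()[:16])  # deterministic, yet practically unpredictable
                                     # without doing the full computation
```

A brain is not a hash chain, of course; the analogy is only that determinism and practical predictability can come apart.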
The experiments of Benjamin Libet and co-authors cast doubt on our perception of being "in control". Libet found that subjects had activity in their brains about 300 milliseconds before they were aware of their volition to press a button. A more recent study found brain activity as much as 10 seconds before subjects were aware of their own conscious decisions. This popular article in Wired addresses it; for more technical details, see the article in Nature Neuroscience.
I was motivated to mention this by a recent solicitation to give money by my alma mater that mentions a freshman seminar devoted to these topics. I think it's great that cutting-edge research (the Nature Neuroscience article is from 2008) makes it so quickly to undergrad classes. And as we understand the science of decision-making better, more philosophers will be able to base their age-old speculations on some actual data instead of armchair thoughts.
Wednesday, November 25, 2009
Stupid Philosopher Tricks: Thomas Nagel
In a previous post, I said, "Whenever scientific subjects are discussed, you can count on some philosopher to chime in with something really stupid."
Here's another example. Thomas Nagel, a philosopher of some repute, nominates Stephen Meyer's Signature in the Cell as his pick for book of the year in the Times Literary Supplement.
Does Nagel have any biological training? None that I could see. Does he know anything about evolution or abiogenesis? Not if he thinks Meyer has any valid contribution to make. Did he bother to check if biologists think Meyer's book is a good contribution to the literature? I doubt it. Did Nagel spot all the phony claims Meyer makes about information? I doubt it again.
Just to cite one: Meyer claims, over and over again, that information can only come from a mind -- and that claim is an absolutely essential part of his argument. Nagel, the brilliant philosopher, should see why that is false. Consider making a weather forecast. Meteorologists gather information about the environment to do so: wind speed, direction, temperature, cloud cover, etc. It is only on the basis of this information that they can make predictions. What mind does this information come from?
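To make the point concrete with a worked example (mine, not Meyer's or Nagel's), here is the standard Shannon measure applied to a made-up series of wind-direction readings; the information content is a property of the measured environment's statistics, with no mind anywhere in sight.

```python
from collections import Counter
from math import log2

# Invented wind-direction observations (N, E, S, W) from a weather station.
readings = ["N", "N", "E", "S", "N", "W", "E", "N", "S", "N", "E", "W"]

counts = Counter(readings)
total = len(readings)

# Empirical Shannon entropy: the average information, in bits, per observation.
entropy = -sum((c / total) * log2(c / total) for c in counts.values())
print(f"{entropy:.3f} bits per reading")  # about 1.89 bits with these counts
```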
It's sad to see such an eminent philosopher (Nagel) make a fool of himself with this recommendation.