Sunday, June 05, 2016

Can Stephen Talbott Be Taken Seriously?


Stephen Talbott, one of the dreariest writers on subjects that should be interesting, manages once again to flail around a topic without saying much at all. He babbles meaningless garbage like "As we have seen, the life of the organism is itself the designing power. Its agency is immanent in its own being, and is somehow expressed at the very roots of material causation." And when he does manage to say something factual, he is, not surprisingly, wrong.

In his latest piece, Can Darwinian Evolutionary Theory Be Taken Seriously?, Talbott (who apparently has no advanced training in evolutionary biology) once again takes on the theory of evolution, without exhibiting much understanding at all.

Rather than write a complete critique, I'll just excerpt some of the stupider parts of his screed, with comments.

I would like to suggest that if half of all American citizens have become (as certain arch-defenders of biological orthodoxy like to put it) “science deniers”, then something important is afoot, and it does not look good for science. At the very least — if we assume the denial to be as unreservedly stupid as it is said to be — it would mean that science has massively and catastrophically failed our educational system.

As is usually the case with those who want to cast doubt on evolution, the fact that Americans have trouble accepting it is trotted out as something significant about the theory. Talbott makes no effort at all to look at acceptance in other countries because (I suspect) it would completely undermine what follows in his piece. After all, if you have to admit that the majority accepts evolution in Iceland, Denmark, Sweden, France, Japan, UK, Norway, Belgium, Spain, Germany, Italy, Netherlands, Hungary, Luxembourg, Ireland, Slovenia, Finland, Czechia, Estonia, Portugal, Malta, Switzerland, and so forth, then maybe ridiculously overblown claims like "science has massively and catastrophically failed our educational system" would be seen for what they are.

Now any fair-minded person knows very well what separates the US from the countries in the list above: it is that many Americans are under the grip of the appalling and anti-intellectual influence of fundamentalist Christianity. The evidence that religion is responsible is easily available and hard to contest. But the words "religion" and "Christianity" appear nowhere in Talbott's piece.

Organisms are not machines.

Of course they are. Anybody who says otherwise is simply being ridiculous. They obey the laws of physics like other machines. The only citation Talbott gives for this claim is his own work.

No one has ever pointed to a computer-like program in DNA, or in a cell, or in any larger structure. Nor has anyone shown us any physical machinery for executing such program instructions.

Of course they have! I wonder what Talbott thinks ribosomes do?

how can it be that, 150 years after Darwin, we still have no widely accepted theory about how all the different body plans arose?

Let's see... could it be, perhaps, because those events occurred hundreds of millions of years ago and didn't leave behind much trace for us to find now? After all, my grandparents arrived here from Russia in 1912-1913, but there is no widely accepted theory about how they got from their home in Vitebsk to Hamburg. Did they walk, or take a train, or use some other method? We don't have a "widely accepted theory" because the evidence is gone now.

If a beautiful, crystal-clear vision of “how evolution works” doesn’t give us answers to key questions about how evolution has in fact worked, perhaps we should begin to ask questions of the vision.

We know many different mechanisms of evolution. (Talbott seems not to know this.) If Talbott thinks there is another mechanism, why doesn't he propose one?

This enables us to greet with a certain recognition the nagging question that has bothered a number of the past century’s most prominent biologists: “What does natural selection select — where do selectable variations come from — and why should we think that the mere selection of already existing variants, rather than the creative production of novel variants in the first place, directs evolution along the trajectories we observe?”

Umm, we know where these variations come from. One place they come from is recombination in sexual organisms. Another source is mutation, arising from copying errors and from mutagens such as radiation. This is taught in every introductory course on evolutionary biology. So why doesn't Talbott know this?

What is life? How can we understand the striving of organisms — a striving that seems altogether hidden to conventional modes of understanding? What makes for the integral unity of every living creature, and how can this unity be understood if we’re thinking in purely material and machine-like terms? Does it make sense to dismiss as illusory the compelling appearance of intelligent and intentional agency in organisms? No one can deny that our answers to these questions could be critically important even for the most basic understanding of evolution. But we have no answers.

We have no answers to "What is life?"? Say what? Talbott doesn't seem to know that there are books devoted to this question, one of the most famous being by Schrödinger, and another, more recent, by Addy Pross. The problem is not that we don't have answers -- many answers have been proposed. The problem is that, as with every complicated concept (the philosopher's favorite example of "chair" will do), no single brief definition can capture all its nuances.

As for the other questions, I absolutely do deny that vague babble like "integral unity" has anything useful or helpful to say in trying to understand biology. And there hasn't been a single advance in biology that comes from thinking in other than "purely material" terms. If there had been, you know Talbott would have shouted it to the rooftops.

Talbott does no experiments in evolution. He publishes no papers in evolutionary biology journals. As far as I can see, he has no expertise in evolution at all. He publishes his stuff in obscure venues like The New Atlantis. Why would anybody take this vapid stuff seriously? Answer: you take it seriously if you're a creationist. No one else should.

P. S. The Nature Institute, where Talbott works, is apparently strongly influenced by Rudolf Steiner, the cult leader and quack who is responsible for the nutty Waldorf schools. Big surprise.

Wednesday, May 25, 2016

Actual Neuroscientists Cheerfully Use The Metaphors Epstein Says are Completely Wrong


Here is yet more evidence that psychologist Robert Epstein is all wet when he claims that computation-based metaphors for understanding the brain are factually wrong and hindering research.

Actual research neuroscientists, summarizing what we know about memory, cheerfully use phrases like "storage of information", "stored memory information", "information retrieval", "information storage", "the systematic process of collecting and cataloging data", "retriev[ing]" of data, and so forth. Epstein claims the brain does not form "representations of visual events", but these researchers say "Memory involves the complex interplay between forming representations of novel objects or events...". The main theme of the essays seems to be that spines and synapses are the fundamental basis for memory storage.

So who do you think is likely to know more about what's going on in the brain? Actual neuroscientists who do research on the brain and summarize the state of the art about what is known in a peer-reviewed journal? Or a psychologist who publishes books like The Big Book of Stress-Relief Games?

Hat tip: John Wilkins.

P. S. Yes, I saw the following: "Further, LTP and LTD can cooperate to redistribute synaptic weight. This notion differs from the traditional analogy between synapses and digital information storage devices, in which bits are stored and retrieved independently. On the other hand, coordination amongst multiple synapses, made by different inputs, provides benefits with regard to issues of normalization and signal-to-noise." Again, nobody thinks that the brain is structured exactly like a modern digital computer. Mechanisms of storage and retrieval are likely to be quite different. But the modern theory of computation makes no assumption that data and programs are stored in any particular fashion; it works just as well if data is stored on paper, disk, flash drive, or in brains.

Saturday, May 21, 2016

Epstein's Dollar Bill and What it Doesn't Prove About the Brain


I hate to pick on poor confused Robert Epstein again, but after thinking about it some more, I'd like to explain why an example in his foolish article doesn't justify his claims.

Here I quote his example without the accompanying illustrations:

In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.

Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):

And here is the drawing she subsequently made with a dollar bill present:

Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.

What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?

Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.

Now let me explain why Epstein's example doesn't even come close to proving what he thinks it does.

First, the average person is not very good at drawing. I am probably much, much worse than the average person in this respect. When I play "pictionary", for example, people always laugh at my stick figures. Yet, given something to look at and copy, I can do a reasonable job of copying what I see. I, like many people, have trouble converting what I see "in my mind's eye" to a piece of paper. So it is not at all surprising to me that the students Epstein asks to draw a dollar bill produce the results he displays. His silly experiment says nothing about the brain and what it "stores" at all!

Second, Epstein claims that the brain stores no representation of a dollar bill whatsoever. He is pretty unequivocal about this. So let me suggest another experiment that decisively refutes Epstein's claim: instead of asking students to draw a dollar bill (an exercise which evidently is mostly about the artistic ability of students), give them five different "dollar bills", four of which have been altered in some fairly obvious respect. For example, one might have a portrait of Jefferson instead of Washington, another might have the "1" in only two corners instead of all four corners, another might have the treasury seal in red instead of the typical green for a federal reserve note, etc. And one of the five is an ordinary bill. Now ask them to pick out which bills are real and which are not. To make it really precise, each student should get just one bill and not be able to see the bills of others.

Here's what I will bet: students will, with very high probability, be able to distinguish the real dollar bill from the altered ones. I know with certainty that I can do this.

Now, how could one possibly distinguish the real dollar bills from the fake ones if one has no representation of the real one stored in the brain?

And this is not pure speculation: thousands of cashiers every day are tasked with distinguishing real bills from fake ones. Somehow, even though they have no representation of the dollar bill stored in their brain, they manage to do this. Why, it's magic!

Thursday, May 19, 2016

Yes, Your Brain Certainly Is a Computer


- Did you hear the news, Victoria? Over in the States those clever Yanks have invented a flying machine!

- A flying machine! Good heavens! What kind of feathers does it have?

- Feathers? It has no feathers.

- Well, then, it cannot fly. Everyone knows that things that fly have feathers. It is preposterous to claim that something can fly without them.

OK, I admit it, I made that dialogue up. But that's what springs to mind when I read yet another claim that the brain is not a computer, nor like a computer, and even that the language of computation is inappropriate when talking about the brain.

The most recent foolishness along these lines was penned by psychologist Robert Epstein. Knowing virtually nothing about Epstein, I am willing to wager that (a) Epstein has never taken a course in the theory of computation; (b) he could not pass the simplest undergraduate exam in that subject; (c) he does not know what the Church-Turing thesis is; and (d) he could not explain why the thesis is relevant to the question of whether the brain is a computer or not.

Here are just a few of the silly claims by Epstein, with my commentary:

"But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently."

-- Well, Epstein is wrong. We, like all living things, are certainly born with "information". To name just one obvious example, there is an awful lot of DNA in our cells. Not only is this coded information, it is even coded in base 4, whereas modern digital computers use base 2 -- the analogy is clear. We are certainly born with "rules" and "algorithms" and "programs", as Francis Crick explains in detail about the human visual system in The Astonishing Hypothesis.
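
To make the "coded in base 4" point concrete, here is a minimal Python sketch. (The particular two-bit encoding of the four nucleotides is arbitrary, chosen purely for illustration.)

    # Each nucleotide carries two bits of information, so "base 4" maps
    # directly onto "base 2". The assignment below is arbitrary.
    ENCODE = {'A': '00', 'C': '01', 'G': '10', 'T': '11'}

    def dna_to_bits(seq):
        return ''.join(ENCODE[nt] for nt in seq.upper())

    print(dna_to_bits("GATTACA"))   # prints 10001111000100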

"We don’t store words or the rules that tell us how to manipulate them."

-- We certainly do store words in some form. When we are born, we are unable to pronounce or remember the word "Epstein", but eventually, after being exposed to enough of his silly essays, suddenly we gain that capability. From where did this ability come? Something must have changed in the structure of the brain (not the arm or the foot or the stomach) that allows us to retrieve "Epstein" and pronounce it whenever something sufficiently stupid is experienced. The thing that is changed can reasonably be said to "store" the word.

As for rules, without some sort of encoding of rules somewhere, how can we produce so many syntactically correct sentences with such regularity and consistency? How can we produce sentences we've never produced before, and have them be grammatically correct?
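
To illustrate what "some sort of encoding of rules" can buy you, here is a toy Python sketch: a handful of stored context-free rules generates an endless supply of novel but grammatical sentences. This is only an illustration of the principle, not a claim about how the brain actually encodes its grammar.

    import random

    # A tiny context-free grammar: a few stored rules suffice to produce
    # sentences that have never been produced before.
    GRAMMAR = {
        'S':   [['NP', 'VP']],
        'NP':  [['Det', 'N'], ['Det', 'Adj', 'N']],
        'VP':  [['V', 'NP']],
        'Det': [['the'], ['a']],
        'Adj': [['silly'], ['astonishing']],
        'N':   [['psychologist'], ['brain'], ['computer']],
        'V':   [['criticizes'], ['stores'], ['imagines']],
    }

    def generate(symbol='S'):
        if symbol not in GRAMMAR:          # a terminal: an actual word
            return symbol
        production = random.choice(GRAMMAR[symbol])
        return ' '.join(generate(s) for s in production)

    print(generate())   # e.g., "the silly psychologist criticizes a computer"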

"We don’t create representations of visual stimuli"

-- We certainly do. Read Crick.

"Computers do all of these things, but organisms do not."

-- No, organisms certainly do. They just don't do it in exactly the same way that modern digital computers do. I think this is the root of Epstein's confusion.

Anyone who understands the work of Turing realizes that computation is not the province of silicon alone. Any system that can do basic operations like storage and rewriting can do computation, whether it is a sandpile, or a membrane, or a Turing machine, or a person. Today we know (but Epstein apparently doesn't) that every such system has essentially the same computing power (in the sense of what can be ultimately computed, with no bounds on space and time).

"The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors."

-- This is just utter nonsense. Nobody says "all computers are capable of behaving intelligently". Take a very simple model of a computer, such as a finite automaton with two states computing the Thue-Morse sequence. I believe intelligence is a continuum, and I think we can ascribe intelligence to even simple computational models, but even I would say that this little computer doesn't exhibit much intelligence at all. Furthermore, there are good theoretical reasons why finite automata don't have enough power to "behave intelligently"; we need a more powerful model, such as the Turing machine.
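
For the curious, here is a Python sketch of how such a two-state machine computes the Thue-Morse sequence: feed it the binary digits of n, flip the state on every 1, and read off the final state.

    def thue_morse(n):
        """t(n), computed by a two-state automaton reading the binary digits
        of n: the state flips on each 1 bit and is unchanged by each 0 bit."""
        state = 0
        for bit in bin(n)[2:]:
            if bit == '1':
                state ^= 1
        return state

    print([thue_morse(n) for n in range(16)])
    # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]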

The real syllogism goes something like this: humans can process information (we know this because humans can do basic tasks like addition and multiplication of integers). Humans can store information (we know this because I can remember my social security number and my birthdate). Things that both store information and process it are called (wait for it) computers.

"a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found."

-- Of course, this is utter nonsense. If there were no representation of any kind of a dollar bill in a brain, how could one produce a drawing of it, even imperfectly? I have never seen (just to pick one thing at random) a crystal of the mineral Fletcherite, nor even a picture of it. Ask me to draw it and I will be completely unable to do so because I have no representation of it stored in my brain. But ask me to draw a US dollar bill (in Canada we no longer have them!) and I can do a reasonable, but not exact, job. How could I possibly do this if I have no information about a dollar bill stored in my memory anywhere? And how is it that I fail for Fletcherite?

"The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous"

-- Well, it may be preposterous to Epstein, but there is at least evidence for it, at least in some cases.

"A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks."

-- So what? What does this have to do with anything? There is no requirement, in saying that the brain is a computer, that memories and facts and beliefs be stored in individual neurons. Storage that is partitioned across various locations, "smeared" across the brain, is perfectly compatible with computation. It's as if Epstein has never heard of digital neural networks, where one can similarly say that a face is not stored in any particular ___location in memory, but rather distributed across many of them. These networks even exhibit some characteristics of brains, in that damaging parts of them doesn't entirely get rid of the stored data.

"My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

"That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms."

-- This is perhaps the single stupidest passage in Epstein's article. He doesn't seem to know that "keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery" is an algorithm. Tell that description to any computer scientist, and they'll say, "What an elegant algorithm!". In exactly the same way, the way raster graphics machines draw a circle is a clever technique called "Bresenham's algorithm". It succeeds in drawing a circle using linear operations only, despite not having the quadratic equation of a circle (x-a)² + (y-b)² = r² explicitly encoded in it.
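
For readers who haven't seen it, here is a sketch of the standard midpoint (Bresenham-style) circle algorithm in Python. Notice that the loop uses only additions, subtractions, and comparisons: the quadratic equation of the circle appears nowhere.

    def circle_points(cx, cy, r):
        """Midpoint (Bresenham-style) circle: lattice points approximating a
        circle of radius r centred at (cx, cy), using only additions,
        subtractions, and comparisons in the loop."""
        points = []
        x, y, err = r, 0, 1 - r
        while x >= y:
            # reflect the computed point into all eight octants
            for dx, dy in ((x, y), (y, x), (-y, x), (-x, y),
                           (-x, -y), (-y, -x), (y, -x), (x, -y)):
                points.append((cx + dx, cy + dy))
            y += 1
            if err < 0:
                err += 2 * y + 1
            else:
                x -= 1
                err += 2 * (y - x) + 1
        return points

    print(sorted(set(circle_points(0, 0, 3))))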

But more importantly, it shows Epstein hasn't thought seriously at all about what it means to catch a fly ball. It is a very complicated affair, involving coordination of muscles and eyes. When you summarize it as "the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery", you hide the amazing amount of computation and the algorithms that are going on behind the scenes to coordinate movement, keep the player from falling over, and so forth. I'd like to see Epstein design a walking robot, let alone a running robot, without any algorithms at all.

"there is no reason to believe that any two of us are changed the same way by the same experience."

-- Perhaps not. But there is reason to believe that many of us are changed in approximately the same way. For example, all of us learn our natural language from parents and friends, and we somehow learn approximately the same language.

"We are organisms, not computers. Get over it."

-- No, we are both organisms and computers. Get over it!

"The IP metaphor has had a half-century run, producing few, if any, insights along the way."

-- Say what? The computational model of the brain has had enormous success. Read Crick, for example, to see how the computational model has had some success in modeling the human visual system. Here's an example from that book that I give in my algorithms course at Waterloo: why is it that humans can find a single red R in a field of green R's almost instantly, whether there are 10 or 1000 letters, or a single red R in a field of red L's almost as quickly, but have trouble finding the unique green R in a large sea of green L's, red R's, and red L's? If you understand algorithms and the distinction between parallel and sequential algorithms, you can explain this. If you're Robert Epstein, I imagine you just sit there dumbfounded.

Other examples of successes include artificial neural nets, which have huge applications in things like handwriting recognition, face recognition, classification, robotics, and many other areas. They draw their inspiration from the structure of the brain, and somehow manage to function enormously well; they are used in industry all the time. If that is not great validation of the model, I don't know what is.

I don't know why people like Epstein feel the need to deny things for which the evidence is so overwhelming. He behaves like a creationist in denying evolution. And like creationists, he apparently has no training in a very relevant field (here, computer science) but still wants to pontificate on it. When intelligent people behave so stupidly, it makes me sad.

P. S. I forgot to include one of the best pieces of evidence that the brain, as a computer, is doing things roughly analogous to digital computers, and certainly no more powerful than our ordinary RAM model or multitape Turing machine. Here it is: mental calculators who can do large arithmetic calculations are known, and their feats have been catalogued: they can do things like multiply large numbers or extract square roots in their heads without pencil and paper. But in every example known, their extraordinary computational feats are restricted to things for which we know there exist polynomial-time algorithms. None of these computational savants have ever, in the histories I've read, been able to factor arbitrary large numbers in their heads (say numbers of 100 digits that are the product of two primes). They can multiply 50-digit numbers in their heads, but they can't factor. And, not surprisingly, no polynomial-time algorithm for factoring is currently known, and perhaps there isn't one.
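
To make the asymmetry concrete, here is a small Python sketch. (The numbers are illustrative only; nothing depends on their exact values.)

    import time

    def trial_division(n):
        """Naive factoring: try divisors 2, 3, 4, ... up to the square root of n."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return n, 1                          # n itself is prime

    # Multiplying two 50-digit numbers is a polynomial-time task: effectively instant.
    a = 10**49 + 33
    b = 10**49 + 97
    start = time.time()
    product = a * b
    print("multiplied in about", time.time() - start, "seconds")

    # Factoring a small semiprime by trial division is easy...
    print(trial_division(10403))             # (101, 103)

    # ...but a 100-digit semiprime would require on the order of 10**50 trial
    # divisions -- hopelessly out of reach for laptops and mental calculators alike.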

Sunday, May 08, 2016

Give Carol Wainio an Honorary Degree in Journalism


Carol Wainio catalogues once again a list of Margaret Wente's journalistic transgressions.

This is a story that Canadian media is not addressing with much intellectual honesty. Take this grotesque column by Emma Teitel, for example.

What journalism school in Canada will be brave enough and honest enough to recognize Wainio's work -- for example, by awarding her an honorary degree?

Wednesday, May 04, 2016

Christian god to Ted Cruz: "Drop Dead"


Poor Ted Cruz.

First, his god tells him to run for President of the US.

Then, his god humiliates him in primary after primary.

Ted wanted his Republican opponents to pray and drop out of the race.

But it didn't quite work out that way.

What a capricious puppet-master god Ted worships!

Sunday, May 01, 2016

Columnist Margaret Wente Caught Plagiarizing Again


Although columnist Margaret Wente has been caught plagiarizing yet one more time by visual artist Carol Wainio, the Globe and Mail refuses to take real responsibility for it, saying only that "This work fell short of our standards, something that we apologize for. It shouldn’t have happened and the Opinion team will be working with Peggy to ensure this cannot happen again."

But it has happened again. Again and again and again and again.

Any reputable newspaper would have, in my opinion, fired Wente long ago. Inexplicably, the National Post's Terence Corcoran actually defends Wente. He can only do so by keeping his eyes firmly shut.

Tuesday, April 26, 2016

Trump Haiku


Craig Kaplan, my brilliant and whimsical colleague, has invented a twitterbot, trump575, that tweets haikus constructed from the opus of Donald Trump. You can follow it here.

Sunday, April 24, 2016

You are Not Allowed to Laugh at the Lies and Idiocies of the Right!


Somebody sent me a link to this piece by Emmett Rensin at Vox.

The author's thesis is that liberals have stopped thinking and spend all their time being smug instead -- but this is certainly not true of conservatives. Liberals, according to Rensin, "hate their former allies". Conservatives, by contrast, are open-minded and persuadable. And, Rensin says, The Daily Show is a perfect example of this liberal smugness.

Well, Rensin goes wrong right there. "Smug" is not even close to the right word to describe Jon Stewart. Bill Maher is smug. Jon Stewart is, at times, almost painfully earnest. Does he make fun of people? Absolutely. But modern conservatism has so many targets that the jokes write themselves: Ben Carson and his pyramids that stored grain. Donald Trump and his claim that he saw "thousands and thousands" of American Muslims celebrating the 9/11 attacks. Ted Cruz and his "Trus-Ted" slogan, when his record of public dishonesty is hard to deny. Rensin apparently thinks we are not allowed to poke fun at all this idiocy and dishonesty.

Here are some examples of liberal smug ignorance, according to Rensin: "the Founding Fathers were all secular deists". Well, that's clearly not so, but some were, at least during part of their life, like Thomas Paine and Ethan Allen. But how is this mistake worse than the conservative claim that "94 percent of the [the era of the Founders'] documents were based on the Bible" (debunked here)?

Another one: "that you're actually, like, 30 times more likely to shoot yourself than an intruder". Perhaps the number "30" is wrong, but that doesn't mean that there isn't a significant health risk in owning a gun. And how is this mistake worse than the conservative insistence on "more guns mean less crime"? Pro-gun "researchers" such as Kleck and Lott are treated by conservatives as unimpeachable, when in fact their errors are extensively documented.

Rensin's thesis is essentially a denigration of the importance of knowledge and facts. Who cares, Rensin says implicitly, if watching Fox News makes you less well informed? Pointing that out is just liberal smugness. Knowledge and facts are just unimportant compared to empathy and open-mindedness, which liberals today lack (while, presumably, conservatives have them in spades). Pay no attention to the fact that when President Obama cited empathy as a desirable characteristic in a Supreme Court justice, conservatives jumped all over him.

Open-mindedness is a virtue -- I'll agree with that. But open-mindedness without skepticism and facts and knowledge just becomes credulity, a willingness to believe anything if it confirms your world view.

Here are just a few of the things that conservatives "know" that just ain't so: that Al Gore claimed to have invented the Internet (debunked here); that Bill Clinton delayed air traffic while he was having a haircut (debunked here); that Hillary Clinton was fired from the Watergate investigation for incompetence (debunked here). Visit any conservative website, mention Al or Bill or Hillary, and you'll only have to wait a few minutes before one of these lies is dragged out yet again. I have grad-school-educated conservative friends that proudly repeat these stories, ferchrissake.

Rensin claims that all this liberal smugness has "corrupt[ed]" them, but he gives no examples of corruption. He claims the case against conservatives is "tenuous", but just dismisses evidence like that given above and in his own article.

Rensin thinks it is somehow "smug" for atheists to point out the religious hypocrisy of Kim Davis. It is here that his argument (and I use the term generously) becomes the most unhinged. Is it really necessary to be a Christian to criticize Christians? Do you have to believe in the divinity of Jesus or be a professional theologian to point out that Kim Davis cannot find support for her actions in Christian theology? When Mike Huckabee opportunistically elbowed out Ted Cruz to be at Kim Davis's rally, Rensin finds Huckabee genuine and admirable, instead of recognizing the pandering opportunism it clearly was.

Rensin is rhetorically dishonest. At one point he tries to refute a claim about the Ku Klux Klan by citing statistics about Stormfront.org. But these are entirely different groups.

Rensin is upset that the Daily Show is "broadcast on national television". Has he never listened to Fox News? Or conservative radio hosts with huge audiences, like Mark Levin and Michael Savage? The vitriol and the outright lies that happen every single day in these venues make Jon Stewart look like gentle fun.

Rensin claims that only Democrats have "made a point of openly disdaining" the dispossessed. One can only make that claim by wilfully ignoring the time Donald Trump made fun of a disabled reporter, or the time a Republican congressional candidate called poor people slothful and lazy, or Mitt Romney's comment that he could never convince 47% of the American people that "they should take personal responsibility and care for their lives".

Rensin thinks liberal smugness is going to ensure a Trump victory: "Faced with the prospect of an election between Donald Trump and Hillary Clinton, the smug will reach a fever pitch: six straight months of a sure thing, an opportunity to mock and scoff and ask, How could anybody vote for this guy? until a morning in November when they ask, What the fuck happened?". Yet who is a better match for the word smug? Hillary Clinton? Bernie Sanders? Look, when even Bill Maher calls you smug, you know you've got smug issues.

Finally, I observe that there doesn't seem to be any way to leave comments on Rensin's piece. That seems pretty smug to me.

Thursday, April 21, 2016

The Small Mind of the Conservative


Here is a splendid example of a certain kind of conservative mind: the kind that can't imagine how things could be any different, or why anyone would want them to be any different, from the way they are today. This kind of person always says, whenever anything novel is brought up, "But we've always done it this way!" Next, they go on to invent all sorts of silly reasons to avoid making any change.

Small-minded is what we used to call this trait, and it's particularly on display here. Mike Strobel, who despite once being Editor-in-Chief of the Toronto Sun doesn't seem to know the difference between "stationary" and "stationery", can't think of a single decent reason to turf the monarchy in Canada.

Instead, he believes keeping them around is a good idea because "the Trudeaus might declare themselves Canada’s royal family and we’d wake up one morning as subjects of King Justin". Perhaps the Queen will save Strobel someday by pushing him out of the way of an errant taxi. Those two preposterous scenarios are about equally likely.

Allan Fotheringham, a commentator that actually has connected brain cells, once said, "Grown-up nations do not need, as head of state, a woman -- however nice -- who lives across a large ocean in a castle in a foreign country." Someday Canada will grow up. Strobel, I'm not so sure about.

Wednesday, April 20, 2016

Under the Influence - An Amazingly Good Radio Show about Advertising


There are only a few radio shows I listen to regularly, but one of them is "Under the Influence" on CBC, an amazingly good show about advertising, hosted by Terry O'Reilly. I recommend it. O'Reilly may have a kind of whiny voice, but he seems to possess detailed knowledge about all facets of advertising, and he paints great pictures with his descriptions.

The latest show is about business-to-business advertising, and features a couple of famous commercials I had never seen before: the Jean-Claude van Damme ad for Volvo, and the "herding cats" ad for EDS.

Do you know any niche radio shows that are exceptionally good?

Tuesday, April 12, 2016

How to Be a Good Little Right-Wing Pundit


Right-wing pundits see bullies everywhere they look. But always on the Left, never on the Right.

Right-wing pundits see lynch mobs everywhere they look. But always on the Left, never on the Right.

A great example is Yale computer scientist David Gelernter. As I've pointed out before, when philosopher Thomas Nagel published a book about materialism, he got a lot of public criticism for his silly and uninformed views. But, according to Gelernter, all this criticism was downright unfair: he called Nagel's critics "punks, bullies, and hangers-on of the philosophical underworld" and a "lynch mob" and a "mass attack of killer hyenas".

But nobody picketed Nagel, or demanded he be fired from his academic job, or threatened to boycott journals where he published. They just criticized him.

Has Gelernter ever stood up for leftist professors who have been threatened with bodily harm or loss of their jobs for their opinions? Not that I've seen.

The latest right-wing pundit to get into hysterics is Brendan O'Neill. He calls transgenderism "intolerant". Novelist Ian McEwan was "subjected to a Twitch hunt", which is a "bloodsport". Critics of McEwan "went berserk" and engaged in "virtual tomato-throwing". It was "reminiscent" of "the Inquisition". The criticism was "attempted silencing". It was "straight out of Nineteen Eighty-Four".

Yup, bullies and lynch mobs everywhere. Except that there aren't any lynch mobs. Nobody attacked McEwan physically. Nobody got in his face, or blocked his path, or threatened him. All critics did was take issue with what he said.

If you want to be a good little right-wing pundit, you have to learn this game. All criticism from the Left is "bullying". All criticism from the Right is "free speech". All criticism from the Left is just like the Inquisition. All criticism from the Right is brave disagreement with the status quo. All criticism from the Left is Orwellian. All criticism from the Right is the true spirit of democracy.

Good little right-wing pundit. Have a puppy treat.

Monday, April 11, 2016

NPR's Word Puzzle


NPR's Sunday puzzle last week was the following: find a five-letter word in which the position, in the alphabet, of the first letter is equal to the sum of the positions of the last four letters.

This week they gave the following answers: maced, table, whack, and zebra.

More generally, one could ask the same question for words of other lengths. Here are a few I found:

cab
hag
jade
leaf
leg
mage
mica
mid
need
pig
rale
real
ride
same
sand
seam
toad
toe
vial
vim
weeded
wend
who
wick
win
yet
yip
zeta
zinc

So "weeded" seems to be the longest word in English with this property. Can you find a longer one?

They also talked about words like "easy" in which the position of the last letter is equal to the sum of the positions of the preceding letters. I found the following other examples:

abbot
ally
away
babe
bail
bendy
bidet
bleat
boar
cachet
debit
dim
draw
eager
fag
feces
flew
gnu
habit
hair
hem
hoax
how
idly
jaggy
joy
kit
lam
man
neat
pact
paddy
sex
tabby
tau
wax

So the longest seems to be "cachet". Can you find a longer one?
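
Here is one way to do the search in Python. It assumes a plain-text word list such as /usr/share/dict/words; a different word list will, of course, turn up somewhat different words.

    def pos(c):
        return ord(c) - ord('a') + 1         # a=1, b=2, ..., z=26

    def first_equals_rest(w):
        # e.g. "zebra": z = 26 = e + b + r + a = 5 + 2 + 18 + 1
        return len(w) >= 2 and pos(w[0]) == sum(pos(c) for c in w[1:])

    def last_equals_rest(w):
        # e.g. "easy": y = 25 = e + a + s = 5 + 1 + 19
        return len(w) >= 2 and pos(w[-1]) == sum(pos(c) for c in w[:-1])

    with open('/usr/share/dict/words') as f:
        words = [w.strip().lower() for w in f]
    words = [w for w in words if w and all('a' <= c <= 'z' for c in w)]

    print(max((w for w in words if first_equals_rest(w)), key=len, default=''))
    print(max((w for w in words if last_equals_rest(w)), key=len, default=''))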

Sunday, April 10, 2016

They Offer Nothing But Lies, 6


Once again, the creationists are telling fibs about information theory. Are they dishonest, or just stupid? In the case of Denyse O'Leary, I'm inclined to suspect the latter:

The belief that randomness produces information (central to Darwinism) is obviously false. It’s never been demonstrated because it can’t be. It is assumed.

No, it's not "assumed". It's proved. It's one of the most basic results in Kolmogorov information theory, demonstrated every year in the classes I teach. With high probability, a randomly-generated list of symbols will contain a lot of information. To understand this you can use one of Dembski's own metaphors: the combination lock. Which will be harder for someone to deduce, a combination that is your birthday in the form mmdd, or the first four digits of pi, or a randomly-generated 4-digit code?
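
Here is a quick empirical illustration, using zlib compression as a crude stand-in: compressed length is only an upper bound on Kolmogorov complexity (which is uncomputable), but the contrast is already stark.

    import os, zlib

    random_bytes = os.urandom(10000)                      # 10,000 random bytes
    english = (b"Shall I compare thee to a summer's day? "
               + b"Thou art more lovely and more temperate. ") * 125   # ~10,000 bytes

    # A randomly-generated string is, with high probability, essentially
    # incompressible; ordinary English text compresses a great deal.
    print(len(zlib.compress(random_bytes, 9)))    # close to 10000
    print(len(zlib.compress(english, 9)))         # a small fraction of that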

This does not seem to penetrate the skull of the rather dense Ms. O'Leary, who then tries to weasel out of her claim by saying

by "information," one means here complex, specified information, produced in vast interlocking patterns on a regular basis.

Oh, so she's not talking about "information" in the way it is used by mathematicians and computer scientists. She's talking about creationist information, that vague, incoherent, and self-contradictory mess invented by Dembski and used by basically no one except creationists.

That mess was debunked years ago.

Here's an example: take any English text T, like the first 10 lines of a Shakespearean sonnet. Now apply any decent encryption function f to it that is not known to an adversary, getting U. To the adversary, U will look like random noise and hence be "unspecified", so it will not constitute creationist information. Now I come along and apply f⁻¹ to U, getting T back again. Voilà! I have now magically created information deterministically, something Dembski claims is impossible.
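
Here is the same point made concrete with a toy stream cipher in Python. (It is for illustration only and is not a secure cipher; any real encryption function would serve just as well.)

    import hashlib
    from itertools import count

    def keystream(key):
        """A toy keystream built from SHA-256 -- illustration only, not secure."""
        for i in count():
            yield from hashlib.sha256(key + i.to_bytes(8, 'big')).digest()

    def xor_crypt(data, key):
        # XOR with the keystream. Applying it twice with the same key undoes it,
        # so the same function plays the role of both f and its inverse.
        return bytes(b ^ k for b, k in zip(data, keystream(key)))

    T = (b"Shall I compare thee to a summer's day?\n"
         b"Thou art more lovely and more temperate...")
    key = b"a key the adversary does not know"

    U = xor_crypt(T, key)             # looks like random noise: "unspecified"
    assert xor_crypt(U, key) == T     # applying the inverse recovers the sonnet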

No matter how many times you explain this, creationists offer nothing but lies in response.

Saturday, April 09, 2016

Margaret Russell on Mississippi's Anti-Gay Law


Here's my old pal Margaret "Peggy" Russell, professor of law at Santa Clara, speaking on KQED about the new anti-gay law passed in Mississippi.

Mississippi is one of three US states I've never visited. I probably won't visit while this law is in effect.

Thursday, April 07, 2016

Another Day, Another Right-Wing Quote Lie


It seems that pretty much every day of the week, one can find right-wing spokesmen using fake quotes to justify their beliefs.

Today's lying wingnut is Sarah Palin, who gave a sing-song speech-like thingie in Wisconsin supporting Donald Trump to barely any applause at all. Near the end (at the 20:30 mark of the video), she says, "Well, General George Patton, he said it best, he -- leading the greatest generation -- he said 'Politicians are the lowest form of life on earth', he said it, I didn't, OK? he said it. And he said, 'Liberal Democrats are the lowest form of politicians.' "

Well, no, Patton didn't. This was debunked months ago.

Sarah Palin, like most of her wingnut friends, is completely uninterested in the truth. All she cares about is having a cudgel to beat Democrats with.

Wednesday, April 06, 2016

These Lawyers are All ASSoLs


This is pretty funny: two donors paid off George Mason University to the tune of $30 million to change the name of their mediocre law school (rated #40 in the US by one measure) to the "Antonin Scalia School of Law".

I guess nobody noticed at the time that the acronym "ASSoL" was really, really appropriate. At least not for a while. But now they've quietly changed their public presence to "The Antonin Scalia Law School at George Mason University".

That won't prevent everyone else from calling them ASSoLs, though.

Tuesday, April 05, 2016

Cold-FX Lawsuit May Be a Remedy for False Health Claims


Cold-FX, a drugstore remedy hawked by Canadian fashion icon Don Cherry, is the subject of a lawsuit alleging the makers "ignored their own research and misled consumers about the short-term effectiveness of the popular cold and flu remedy". Cold-FX is basically just some sort of ginseng extract, although they give it the fancy name "CVT-E002". The suit was brought by Don Harrison of Vancouver Island.

Questions about the efficacy of Cold-FX have been raised for years.

Whether or not the claims of Cold-FX are false -- nothing has been proven in court yet -- there is no question that there is a lot of fraud in the over-the-counter pharmacy market, including worthless homeopathic remedies marketed as being effective against a wide variety of illnesses.

Hopefully this lawsuit, whether it succeeds or not, will make pharmaceutical companies much more diligent about ensuring the veracity of their claims.

Friday, March 25, 2016

Et in Arcadia Ego


Monday, March 14, 2016

The Future of Recursivity


Hello, readers! I'm pleased to announce that I've joined freethoughtblogs.com, home of P. Z. Myers and other interesting bloggers.

This doesn't mean that this blog will die. I intend to cross-post things here and there. Comment wherever you like.

More info later, as I learn how to use the new system.

Thursday, March 03, 2016

James Tour's First Talk: Nanotechnology and God


The dilemma of the scientist who is also a devout Christian* is clear: on the one hand, in his/her professional life the scientist must explore the natural world and rigorously apply skepticism to his/her conclusions. The scientist is always asking, "Could there be some other explanation I haven't thought of?". On the other hand, the devout Christian is required to accept an incoherent and nonsensical theology, and to renounce other reasonable explanations for the events that supposedly occurred in the New Testament. Skepticism is replaced by faith.

This is one reason, I suspect, that Christians are more common in the physical sciences and less common in fields like anthropology, psychology, and sociology. A really serious anthropologist or sociologist or psychologist who is a believer would be, I suspect, consumed by trying to understand the personality, motives, and characteristics of the Christian god, and that can only be done with some rigor through a scientific study, which the Christian is explicitly forbidden to do by Matthew 4:7, Deuteronomy 6:16, and Luke 4:12. In contrast, the chemist, geologist, or physicist is able to compartmentalize his/her beliefs more successfully. Believing in Jesus is not going to strongly impact your experiments if you are studying the properties of organic compounds or the interactions and decays of baryons.

Compartmentalization is the key word here, and it was very much in evidence in last night's Pascal lecture. To say it was a lecture is somewhat overstating the case. It started with a commercial (for James Tour's lab and its admittedly excellent work in nanotechnology) and it finished with pure Christian evangelism. Given that the title was "Nanotechnology and God", I was expecting a somewhat more polished segue between the two topics (although it was too much to hope that he might have remarked that both topics concern the vanishingly small). There was none. Prof. Tour went in one sentence from a summary of his work on nanotechnology to a story of how he became a Messianic Jew. Along the way he admonished the audience in various ways: to abjure pornography, to pray for personal success, to read the bible every day.

Here are a few of the things that Prof. Tour seems to believe, as I understand it. First, that the "fact of the Resurrection is overwhelming". (I don't think he uses the term "fact" the way I do.) Second, that he obtained his wife because he prayed for one and his god granted his wish. Third, that he was offered money to buy more software (through some supernatural intervention?) because of his virtuous refusal to break the license of software he used on other machines. Fourth, that he prays before every course lecture and scientific talk that his presentation will be wonderful. Fifth, that he prayed for the success of a disliked colleague and this resulted in the success of the colleague and his transfer to another university. Honestly, it was really hard to not laugh at all that.

I find it fascinating that such a scientist -- evidently extremely clever -- can successfully convince himself that there is a supernatural being, the creator of the universe, who is so obsessively concerned in that scientist's success and life that he (the god) arranges things so that departmental money to buy software arises (because of the scientist's virtue) and so that rival colleagues get offers to leave.

I did get a chance to ask a question. After hearing this litany of successes that Prof. Tour had achieved through prayer, I asked him what percentage of things he prays for don't come true. He was unable to provide a figure. This is compartmentalization again. A scientist would, I would think, want to know this important fact. Do certain kinds of prayers work better than others? Does the time of day affect success? Is success actually greater than chance, or does Prof. Tour simply forget about the prayers that don't come true? Does it help if the prayer is said in certain languages? Are prayers for oneself granted more often than prayers for others? What happens when two equally virtuous people pray for opposite outcomes?

Another thing that I was able to establish was that Prof. Tour, despite being a signatory to the Discovery Institute's notorious "Scientific Dissent from Darwinism", has never read a college-level textbook about evolution. In my opinion, this is irresponsible (but not surprising).

In short, although Prof. Tour is a good speaker who clearly has done excellent work, his religious beliefs seem (to me) to be childish and unwarranted. His personal version of prosperity theology is laughable. The event was largely evangelical and not intellectual in nature, and is inappropriate to be sponsored and endorsed by a public university that accepts students of all faiths. It was, in short, another embarrassment.

* I phrase it this way to avoid the ambiguous term "Christian scientist".

Wednesday, March 02, 2016

The Pascal Lecture, James Tour, and Shallit's Law


Well, once again it's time for that annual embarrassment at the University of Waterloo, the Pascal lecture series.

Our university actually sponsors these lectures, which are designed to "[challenge] the university to a search for truth through personal faith and intellectual inquiry which focus on Jesus Christ." I think it's completely inappropriate for a secular university to evangelize for a particular religion in this manner.

Previous lectures that I've attended and written about include Mary Poplin (also see here and here); the late Charles Rice (also see here and here); and John Lennox (also see here and here and here).

This year the speaker is James Tour, a chemist and signer of the infamous Discovery Institute dissent from Darwinism letter. You can see Tour's own rambling account of his dissatisfaction with evolution here. I hope he's a better speaker than he's a writer.

For more about Tour, see Larry Moran's take here.

I'm going to try to go, but I may be too jet-lagged to do so. In any event, I want to recall a law I have modestly named after myself: Shallit's law. Here it is:

"Whenever a distinguished scientist, physician, or engineer claims that he or she `doesn't understand' evolution, or `encourages skepticism' about evolution or that evolution `skeptics' are poorly treated, some fatuous utterance about Jeebus will soon follow."

You can evaluate the accuracy of Shallit's law by attending Tour's lectures, I suppose.

P. S. The Pascal lecture committee, as well as other dubious sites, like to cite Tour as "one of the 50 most influential scientists in the world" as stated by thebestschools.org. But thebestschools.org is a project of none other than James Barham, the ID-friendly but extremely confused philosopher who testified for the creationists in Kansas. In other words, it is not an unbiased source. In fact, I don't see much evidence that it's anything more than just James Barham sitting in a basement somewhere.

Wednesday, February 17, 2016

What Scalia Was Truly Like


If you want to get a feel for what the late Supreme Court justice Antonin Scalia was like, you can do no better than to read this long interview from three years ago.

Some highlights: despite being so "brilliant", Scalia was unsure about the pronunciation of the word "ukase" and wasn't familiar with the term "tell" as applied to poker. I am neither a lawyer nor a poker player, but I knew both of these. And I'm not particularly bright.

Scalia also knew nothing about linguistics, if he thought "Words have meaning. And their meaning doesn’t change." That's an extremely naive view of language and meaning. In reality, the meaning of words is fuzzy and smooshed out. And meaning changes all the time. Compare our current understanding of "nubile" with the definition in a dictionary from 50 years ago.

Scalia read the Wall Street Journal and the Moonie-controlled Washington Times, but stopped reading the Washington Post because it was "slanted and often nasty". He didn't read the New York Times at all. Talk about being unaware of your own biases!

Scalia believed that the "Devil" is a real person because it is Catholic dogma (and by implication, because one cannot be a Catholic without accepting all of Catholic dogma). That's exactly the kind of black-and-white extremist viewpoint it takes to be an originalist. He thought this being was occupied in getting people not to believe in the Christian god. And he liked The Screwtape Letters, easily the stupidest of C. S. Lewis's output (and that's saying something). Scalia justified his belief by saying "Many more intelligent people than you or me have believed in the Devil." Yeah, well, many more intelligent people than I believe in Scientology, Bigfoot, and alien abductions, but that isn't a good argument for them. He also said that the Devil's becoming cleverer was "the explanation for why there’s not demonic possession all over the place. That always puzzled me. What happened to the Devil, you know? He used to be all over the place." The other explanation -- that there is no Devil and demonic possession never happened (it was health conditions misinterpreted by an ignorant and superstitious populace) -- was too obviously correct for him to consider.

Scalia thought that the only two possible choices after his death were "I'll either be sublimely happy or terribly unhappy." The obvious correct choice -- namely that he would simply cease to be -- did not even enter his mind as a possibility.

Scalia thought he was "heroic" by not recusing himself in a case where he clearly should have recused himself.

Reading this interview I could only think: What an asshole! Good riddance.

Monday, February 15, 2016

My Scalia Experience


Now that Supreme Court justice Antonin Scalia has died, one can find tributes to him everywhere, even from some liberals. He is being lauded for his intelligence and for being a nice guy in person.

Well, my Scalia experience is different. First, he may have been extremely intelligent, but even intelligent people can have blind spots. For Scalia, one obvious blind spot was the theory of evolution. Not only did he not understand the status of the theory among scientists, as Stephen Jay Gould famously pointed out, but he also recently used the figure "5000 years" as an estimate for the age of humanity, when the actual figure is more like 100,000 to 200,000 years.

And as for being a nice guy, I can only tell about my own experience. Sometime in the late-1980's (I think it was 1987) he came to give a speech at the University of Chicago when I was teaching there. At the end of the talk there was time for questions. I asked a question -- and I don't really remember what it was about -- and Scalia got all huffy. He said something like, "I don't think that's appropriate for me to answer. In fact, it was completely inappropriate for you to ask."

Well, it wasn't. It was something definitely appropriate and about constitutional law, even if I don't quite remember what I asked. What I remember was the contempt he expressed in his words and body language that anyone would dare ask.

So maybe it's true, as some have said, that he was a wonderful guy with a great sense of humor and enormous intelligence. All I can say as an outsider is, not in my experience.

Yet Another Dubious Journal


From a recent e-mail message I received:

Dear Dr. Jeffrey Shallit,

Greetings from Graphy Publications

We kindly invite you to join the editorial board for International Journal of Computer & Software Engineering

The journal aims to provide the most complete and reliable source of information on current developments in the field of computer & software engineering. The emphasis will be on publishing quality articles rapidly and making them freely available to researchers worldwide. The journal will be essential reading for scientists and researchers who wish to keep abreast of the latest developments in the field.

International Journal of computer & software engineering is an international open access journal using online automated Editorial Managing System of Graphy Publications for quality review process. For more details please go through below link.

http://www.graphyonline.com/journal/journal_editorial_board.php?journalid=IJCSE

Hope you accept our invitation and you are requested to send us your recent passport size photo (to be displayed on the Journal’s website), C.V, short biography (150 words) and key words of your research interests for our records.

We are keenly looking forward to receiving your positive response

Yours sincerely,

J. Hemant
Managing Editor
International Journal of Computer & Software Engineering
Graphy Publications

Any journal of "computer & software engineering" that invites me to be on the editorial board, when I don't work in either computer engineering or software engineering, is clearly not to be taken seriously. Other bad signs: random capitalization in the invitation letter, failure to end sentences with proper punctuation, and an editorial board filled with people I've never heard of. Not surprisingly, the publisher, "Graphy Publications", is on Beall's List of Potential, possible, or probable predatory scholarly open-access publishers.

Thursday, February 11, 2016

Reproducibility in Computer Science


There has been a lot of discussion lately about reproducibility in the sciences, especially the social sciences. The result that garnered the most attention was the Nosek study, where the authors tried to reproduce the results of 98 studies published in psychology journals. They found that they were able to reproduce only about 40% of the published results.

Now it's computer science's turn to go under the spotlight. I think this is good, for a number of reasons:

  1. In computer science there is a lot of emphasis placed on annual conferences, as opposed to refereed journal articles. Yes, these conferences are usually refereed, but the reports are generally done rather quickly and there is little time for revision. This emphasis has the unfortunate consequence that computer science papers are often written quite hastily, a week or less before the deadline, in order to make it into the "important" conferences of your area.

  2. These conferences are typically quite selective and accept only 10% to 30% of all submissions. So there is pressure to hype your results and sometimes to claim a little more than you actually got done. (You can rationalize it by saying you'll get it done by the time the conference presentation rolls around.)

    (In contrast, the big conferences in mathematics are often "take-anything" affairs. At the American Mathematical Society meetings, pretty much anyone can present a paper; they sometimes have a special session for the papers that are whispered to be junk or crackpot stuff. Little prestige is associated with conferences in mathematics; the main thing is to publish in journals, which have a longer time frame suitable for good preparation and reflection.)

  3. A lot of research in computer science, especially the "systems" area, seems pretty junky to me. It always amazes me that in some cases you can get a Ph.D. just for writing some code, or, even worse, just modifying a previous graduate student's code.

  4. Computer science is one of the areas where reproducibility should (in theory) be the easiest. Usually, no complicated lab setup or multimillion-dollar equipment is needed. You don't need to recruit test subjects or pass through ethics reviews. All you have to do is compile something and run it!

  5. A lot of computer science research is done using public funds, and as a prerequisite for obtaining those funds, researchers agree to share their code and data with others. That kind of sharing should be routine in all the sciences.

Now my old friend and colleague Christian Collberg (who has one of the coolest web pages I've ever seen) has taken up the cudgel of reproducibility in computer science. In a paper to appear in the March 2016 issue of Communications of the ACM, Collberg and co-authors Todd Proebsting and Alex M. Warren relate their experiences in (1) trying to obtain the code described in papers and then (2) trying to compile and run it. They did not attempt to reproduce the results in papers, just the very basics of compiling and running. They did this for 402 (!) papers from recent issues of major conferences and journals.

The results are pretty sad. Many authors had e-mail addresses that no longer worked (probably because they had moved to other institutions or left academia). Many simply did not reply to the request for code (in some cases Collberg filed freedom-of-information requests to try to get it). Of those who did reply, many sent code that failed for a variety of reasons, such as missing files. Ultimately, only about half of all papers had code that passed the very basic tests of compiling and running.

This is going to be a blockbuster result when it comes out next month. For a preview, you can look at a technical report describing their results. And don't forget to look at the appendices, where Collberg describes his ultimately unsuccessful attempt to get code for a system that interested him.

Now it's true that there are many reasons (which Collberg et al. detail) why this state of affairs exists. Many software papers are written by teams, including graduate students who come and go. Sometimes the code is not adequately archived, and a disk crash means it is lost. Sometimes the current system has been greatly modified from what's described in the paper, and nobody saved the old version. Sometimes the code ran under older operating systems but not newer ones. Sometimes code is "fragile" and not suitable for distribution without a great deal of extra work that the authors don't want to do.

So in their recommendations Collberg et al. don't demand that every such paper provide working code when it is submitted. Instead, they suggest a much more modest goal: that at the time of submission to conferences and journals, authors mention what the state of their code is. More precisely, they advocate that "every article be required to specify the level of reproducibility a reader or reviewer should expect". This information can include a permanent e-mail contact (probably of the senior researcher), a website from which the code can be downloaded (if that is envisioned), the degree to which the code is proprietary, availability of benchmarks, and so forth.

Collberg tells me that as a result of his paper, he is now "the most hated man in computer science". That is not the way it should be. His suggestions are well-thought-out and reasonable. They should be adopted right away.

P. S. Ironically, some folks at Brown are now attempting to reproduce Collberg's study. There are many who take issue with specific evaluations in the paper. I hope this doesn't detract from Collberg's recommendations.

Tuesday, February 09, 2016

More Silly Philosopher Tricks


Here's a review of four books about science in the New York Times. You already know the review is going to be shallow and uninformed because it is written not by a scientist or even a science writer, but by James Ryerson. Ryerson is more interested in philosophy and law than science; he has an undergraduate degree from Amherst, and apparently no advanced scientific training.

In the review he discusses a new book by James W. Jones entitled Can Science Explain Religion? and says,

"If presented with this argument, Jones imagines, we would surely make several objections: that the origin of a belief entails nothing about its truth or falsity (if you learn that the earth is round from your drunk uncle, that doesn’t mean it’s not)..."

Now I can't tell if this is Jones or Ryerson speaking, but either way it illustrates the difference between the way philosophers think and the way everyone else thinks. For normal people who live in a physical world, where conclusions are nearly always based on partial information, the origin of a belief does and should impact your evaluation of its truth.

For example, I am being perfectly reasonable when I have a priori doubts about anything that Ted Cruz says, because of his established record for lying: only 20% of his statements were evaluated as "true" or "mostly true". Is it logically possible that Cruz could tell the truth? Sure. It's also logically possible that monkeys could fly out of James Ryerson's ass, but I wouldn't be required to believe it if he said they did.

For non-philosophers, when we evaluate statements, things like the speaker's reputation for veracity are important, as are evidence, the Dunning-Kruger effect, the funding of the person making the statement, and so forth. Logic alone does not rule in an uncertain world; in the real world these things matter. So when a religion professor and Episcopal priest like Jones writes a book about science, I am not particularly optimistic he will have anything interesting to say. And I can be pretty confident I know his biases ahead of time. The same goes for staff editors of the New York Times without scientific training.

Friday, February 05, 2016

3.37 Degrees of Separation


This is pretty interesting: Facebook has a tool that estimates the average number of intermediate people needed to link you, via the shortest path, to anyone else on Facebook. Mine is 3.37, which means the average path length (number of links) to me is 4.37, or that the average number of people in a shortest chain connecting others with me (including me and the person at the end) is 5.37.

What's yours?

An interesting aspect of this is that they use the Flajolet-Martin algorithm to estimate the path length. The Flajolet-Martin paper involves a certain correction factor φ, defined as follows: φ = 2 e^γ α^(-1), where γ = 0.57721... is Euler's constant and α is the constant Π_{n ≥ 1} (2n/(2n+1))^((-1)^t(n)), where t(n) is the Thue-Morse sequence, the sequence that gives the parity of the number of 1's in the binary expansion of n.
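For the curious, here is a bare-bones, single-hash sketch of the Flajolet-Martin idea in Python (my own toy version, certainly not the code Facebook runs): hash each item, record the position of the lowest-order 1 bit of the hash in a bitmap, and estimate the number of distinct items as 2^R/φ, where R is the position of the lowest zero bit of the bitmap. A single hash gives only a rough estimate; real implementations average over many hash functions.

    import hashlib
    import random

    PHI = 0.77351  # the Flajolet-Martin correction factor discussed above

    def rho(x):
        """Index (0-based) of the lowest-order 1 bit of x; large if x == 0."""
        return (x & -x).bit_length() - 1 if x else 64

    def fm_estimate(items):
        """Crude single-hash Flajolet-Martin estimate of the number of distinct items."""
        bitmap = 0
        for item in items:
            h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
            bitmap |= 1 << rho(h)
        R = 0  # position of the lowest zero bit of the bitmap
        while bitmap & (1 << R):
            R += 1
        return 2 ** R / PHI

    stream = [random.randrange(10000) for _ in range(100000)]
    print(len(set(stream)), round(fm_estimate(stream)))  # exact count vs. rough estimate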

The Thue-Morse sequence has long been a favorite of mine, and Allouche and I wrote a survey paper about it some time ago, where we mentioned the Flajolet-Martin formula. The Thue-Morse sequence comes up in many different areas of mathematics and computer science. And we also wrote a paper about a constant very similar to α: it is Π_{n ≥ 0} ((2n+1)/(2n+2))^((-1)^t(n)). Believe it or not, it is possible to evaluate this constant in closed form: it is equal to √2/2!

By contrast, nobody knows a similar simple evaluation for α. In fact, I have offered $50 for a proof that α is irrational or transcendental.
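For anyone who wants to see the numbers, here is a quick-and-dirty Python check (a throwaway sketch, not code from any of our papers): it computes t(n) and the partial products of both infinite products above. The first drifts, slowly, toward √2/2 ≈ 0.70711; the second is α, for which no such closed form is known.

    def thue_morse(n):
        """t(n) = parity of the number of 1 bits in the binary expansion of n."""
        return bin(n).count("1") % 2

    def woods_robbins(terms):
        """Partial product of Prod_{n >= 0} ((2n+1)/(2n+2))^((-1)^t(n))."""
        p = 1.0
        for n in range(terms):
            f = (2 * n + 1) / (2 * n + 2)
            p *= f if thue_morse(n) == 0 else 1.0 / f
        return p

    def alpha(terms):
        """Partial product of Prod_{n >= 1} (2n/(2n+1))^((-1)^t(n))."""
        p = 1.0
        for n in range(1, terms + 1):
            f = (2 * n) / (2 * n + 1)
            p *= f if thue_morse(n) == 0 else 1.0 / f
        return p

    print(woods_robbins(1 << 20))  # roughly 0.70711 = sqrt(2)/2
    print(alpha(1 << 20))          # a numerical value for alpha; no closed form is known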

Friday, January 29, 2016

Yet More Bad Creationist Mathematics


It's not just biology that creationists resolutely refuse to understand. Their willful ignorance extends to many other fields. Take mathematics, for example.

At the creationist blog Uncommon Descent we have longtime columnist "kairosfocus" (Gordon Mullings) claiming that "a set of integers that spans to infinity will have members that are transfinite", showing that he doesn't understand even the most basic things about the natural numbers.

And we also have Jonathan Bartlett asking "can you develop an effective procedure for checking proofs?" and answering "The answer is, strangely, no."

Actually, the answer is "yes". A mathematical proof can indeed be checked, and easily so (in principle). This has nothing to do with Bartlett's next statement: "It turns out that there are true facts that cannot be proved via mechanical means." Yes, that's so; but it has nothing to do with an effective procedure for checking proofs. Such a procedure simply verifies that each line is an axiom or follows from previous lines by an application of the rules of inference.
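To see just how mechanical checking is, here is a toy Hilbert-style checker in Python (a deliberately simplified sketch of my own, treating formulas as plain strings; real proof checkers handle axiom schemas and more rules of inference, but the principle is the same): a purported proof is accepted if and only if each line is an axiom or follows from two earlier lines by modus ponens.

    def check_proof(lines, axioms):
        """Accept iff each line is an axiom or follows from earlier lines by modus ponens."""
        proved = []
        for formula in lines:
            ok = formula in axioms or any(
                f"{a} -> {formula}" in proved  # modus ponens: from A and "A -> B", conclude B
                for a in proved
            )
            if not ok:
                return False
            proved.append(formula)
        return True

    axioms = {"p", "p -> q", "q -> r"}
    print(check_proof(["p", "p -> q", "q", "q -> r", "r"], axioms))  # True: a valid derivation of r
    print(check_proof(["p", "r"], axioms))                           # False: r doesn't follow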

If a statement S has a proof, there is a semi-algorithm that will even produce the proof: simply enumerate all proofs in order of length and check whether each one is a proof of S. The problem arises when a true statement simply does not have a proof. It has nothing to do with checking a given proof.

Can't creationists even get the most basic things correct?

Saturday, January 09, 2016

Our Car's Fibonacci Odometer


Been waiting for this for 11 years, and it finally happened!

Saturday, January 02, 2016

You Don't Have to Be a Sociopath to Become a Theist....


...but apparently it helps, at least judging from this video.

Several things came to mind as I watched this. First, if David Wood's story is largely true, then he's clearly a sociopath, and why should we believe anything he says? He could just be manipulating us for some sick purpose. On the other hand, if his story is largely false, then he's clearly a pathological liar, and why should we believe anything he says? Of course, his story could be partly true and partly false (my guess), but then the same conclusion holds.

Second is how persuasive even a terrible design argument like the one proposed here can be for a diseased or weak mind. Don't bother studying any mathematics, or computer science, or biology. Just assert that there is no evidence for the scientific world view, and voilà!

Third is what an ignorant bastard the guy is, for someone who thought he was the greatest person in the world. He thinks shingles is caused by a vitamin deficiency, fer chrissake!

Oh well. I am comforted by the fact that there are lots of decent people who are religionists. They're not all sociopaths like David Wood.

Monday, December 21, 2015

10th Blogiversary!


Ten years ago, this blog, Recursivity, was born.

I've had a lot of fun with it, even though I never really had very much time to devote to it. A thousand posts in ten years sounds like a lot, but I wish I could have written a thousand more.

Generally speaking, my readers have been great. In ten years, I think I only had to ban two or three commenters, including one Holocaust denier. Thank you to everyone who read what I had to say, and even more thanks to those who took the time to comment.

Here are 25 of my favorite posts from the last ten years:

  1. Why We Never Lied to Our Kids About Santa: my absolute favorite, and still appropriate. You can criticize atheism and religion, but if you really want to get a reaction, just criticize the myth of Santa Claus.
  2. Robert J. Marks II refuses to answer a simple question: still waiting, more than a year later.
  3. Hell would be having to listen to Francis Spufford: Damn, he was boring.
  4. By the Usual Compactness Argument: for mathematicians only.
  5. Ten Common LaTeX Errors
  6. I defend a conservative politician's right to speak on campus
  7. Science books have errata. Holy books don't
  8. No Formula for the Prime Numbers?: Debunking a common assertion.
  9. In Memory of Sheng Yu (1950-2012): my colleague - I still miss him.
  10. Another Fake Magnet Man Scams AP
  11. William Lane Craig Does Mathematics
  12. Why Do William Lane Craig's Views Merit Respect?: Nobody gave a good answer, by the way!
  13. Stephen Meyer's Bogus Information Theory
  14. Religion Makes Smart People Stupid
  15. Test Your Knowledge of Information Theory
  16. David Berlinski, King of Poseurs
  17. Graeme MacQueen at the 9/11 Denier Evening
  18. Mathematics in a Jack Reacher novel
  19. The Prime Game: This appeared in my 2008 textbook, too.
  20. Debunking Crystal Healing
  21. Nancy Pearcey, The Creationists' Miss Information
  22. Academic Vanity Scams
  23. Time Travel: my second favorite, which nobody seemed to like that much.
  24. Janis Ian Demo Tape: the best part was that Janis Ian herself stopped by to comment!
  25. The Subversive Skepticism of Scooby Doo: my third favorite.
Happy Holidays to everyone, and may 2016 be a great year for you.

Sunday, December 20, 2015

Merry Kitzmas!


It's been ten years since the landmark decision of Kitzmiller v. Dover was handed down, the case that exposed the religious fraud of that absurd pseudoscience, "intelligent design". The ID movement, and especially its "think tank", the Discovery Institute, has never recovered.

I had the honor of meeting the lead plaintiff, Tammy Kitzmiller, a few years ago at one of the trial reunion parties. (I played an extremely minor role in the case, meeting with the lawyers for the plaintiffs and preparing as a possible rebuttal witness, but I never appeared in court because one person on the other side never testified.) A more pleasant and modest (yet determined!) person you can't imagine. In fact, all the people involved in the case in various ways, including Eric Rothschild, Nick Matzke, Steve Harvey, Kenneth Miller, Wes Elsberry, Genie Scott, and Lauri Lebo are about the nicest and most interesting people I've ever met. The contrast with the other side couldn't be more stark.

Ten years later, what's happening? Well, the Discovery Institute and their friends continue to churn out lies pretty much unabated, but nobody's listening any more. Even the "academic wing" of intelligent design seems to have given up. Bill Dembski just threw in the towel. And even as Casey Luskin boasts about all the scientific work published by ID advocates, the flagship scientific journal of the movement published only a single paper in calendar year 2015, despite getting a new editor and having an editorial board with 29 members.

There's only so long you can keep up a charade like this.

Meanwhile, the same nasty, deplorable tactics that earned the Discovery Institute the nickname "Dishonesty Institute" continue unabated. When one of the Kitzmiller team recently got a paper accepted to Science, one of the world's most prestigious scientific journals, all the Discovery Institute (and their slavering friends) could do was make ridiculous and groundless insinuations of misconduct. Truly, they have no shame at all.

Why do the ID folks behave so reprehensibly, over and over again? Of course, it has nothing at all to do with science. They behave this way because they are motivated solely by their conservative religious beliefs. Recently a window opened onto the ID world view, when one creationist was so disillusioned by their behavior that he posted a private e-mail message from ID advocate Barry Arrington that clearly revealed their motives. Arrington wrote:

"We are in a war. That is not a metaphor. We are fighting a war for the soul of Western Civilization, and we are losing, badly. In the summer of 2015 we find ourselves in a position very similar to Great Britain’s position 75 years ago in the summer of 1940 – alone, demoralized, and besieged on all sides by a great darkness that constitutes an existential threat to freedom, justice and even rationality itself."

When you view your opponents this way, then no tactic is off limits. Lying is permissible because otherwise the "great darkness" will win. Insulting, making insinuations, likening your opponents to Nazis or Communists or fascists are all perfectly fine tactics, because your opponents constitute "an existential threat to freedom" and "justice". Treating your opponents as subhuman is ok, because after all, they threaten "rationality itself". And of course, they never, ever, admit they were wrong about anything.

I feel sorry for the ID folks today, I really do. Ten years after Kitzmiller, ID advocates are like the Millerites on October 23, 1844, when their predicted triumphal ascent into heaven didn't happen. They are wandering around feeling puzzled and alone, and it's natural that they will lash out against any available target in an effort to ease their misery. It won't work, I'm afraid. Intelligent design is, for all practical purposes, dead. Prop up the corpse all you want -- it won't work.

Meanwhile, science and evolutionary theory continue unabated. Those of us who enjoy and respect science (and there are lots!) continue to think about and solve interesting problems. The joy of discovery is genuine for us. May you find it, too.

So, to you and yours, I wish you a very merry Kitzmas.

Friday, December 18, 2015

Polish Immigrants a German Problem?


Only if the immigrants are moose.

Wednesday, December 16, 2015

1000 citations for "Automatic Sequences"


I don't normally like to use this blog to advertise my own achievements, but this one was too fun to pass up.

Back in 2003, Jean-Paul Allouche and I published a book, Automatic Sequences, with Cambridge University Press. There's some info about the book, including errata and some reviews, here.

Just this week, our book reached a milestone we could not have anticipated 12 years ago: 1000 citations on Google Scholar.

It is our most cited work.

Of course, this number is a bit misleading, since it double-counts papers on the arxiv that later appear in conferences and journals. Nevertheless, I'm really pleased that our book (despite its defects) has been so useful and influential. And thanks to my great co-author, Jean-Paul Allouche!

Saturday, December 12, 2015

The Difference Between Republicans and Democrats


Here is a chart that illustrates, better than anything I've seen, the fundamental difference between the two US political parties.

It wasn't always this way. There were lots of honest Republicans when I was growing up, from Philadelphia's Thacher Longstreth, to Millicent Fenwick, to Gerald Ford. Now I can hardly think of a single one, with Bob Ehrlich (former governor of Maryland) being an exception.

Today's Republican party is home to the craziest, most extreme, lying lunatics ever assembled in one place.

And it's no coincidence that of all the lying liars that represent the party today, the biggest lying liar of them all is Ben Carson -- who is also a favorite of the creationists.

To today's Republicans and creationists alike, the truth means absolutely nothing.

Discovery Institute Lies Again!


Oh, look! The Dishonesty Discovery Institute is out with yet another video. What a thrill.

Don't bother watching it, though. There are no new arguments at all. It's just the same lies as usual, repackaged for the nth time. You wonder why their spokesmen don't get just a little bit bored repeating the same misinformation, practically verbatim, over and over. It's more like they are evangelical hucksters than scientists. Why would that be?

  • Citing the 1966 Wistar Institute Symposium -- and pretending it was an important and influential scientific meeting, when in fact it had basically no influence on biology at all. And then pretending that the questions raised haven't been answered since then. Misrepresenting the symposium has a long history among creationists.
  • Doug Axe citing his 11-year-old 10^-77 claim, long debunked. As far as I can see, Dougie hasn't gotten anything published in a real scientific journal since 2008. All his recent publications seem to be in the intelligent design vanity journal Bio-Complexity or similar crappy venues. I wonder why the Ahmansons continue to fund this embarrassment.
  • Stephen Meyer repeating his lie once again that "Whenever we see information, especially when we find information in a digital or typographic form and we trace it back to its ultimate source, we always come to a mind, not a material process." Of course, that's not true. The environment is full of information; how else would we be able to do weather prediction? (I don't buy the implicit claim that the mind is not a material process, either.)
Intelligent design is so over. Get a life, guys. Do some actual work, or give up and stop pretending this charade is real science.

Friday, December 11, 2015

More Sprinkler Moose


Devoted readers of this blog (are there any?) will recall this post from 2008 with baby moose playing in a lawn sprinkler.

Apparently it's a thing, now. Here's a new video.

Hat tip: R. M.

Monday, December 07, 2015

Another Philosophy Fail


This article by Notre Dame philosopher Gary Gutting is interesting, but not in the way that Prof. Gutting seems to think. It's interesting because it demonstrates the intellectual bankruptcy and uselessness of the kind of philosophy that a lot of academics do.

Gutting presents the cosmological argument for the existence of a god, and seems to think it deserves to be taken seriously.

I say, it doesn't. Not only that, the fact that a well-respected philosopher thinks it does, and gets it published by a well-respected publisher like W. W. Norton, demonstrates that something is terribly, terribly wrong with much of academic philosophy.

Here, briefly, are just a few things that I think are wrong.

1. Gutting never defines "cause" or "caused". The words are very difficult to make rigorous, which is one reason why if you pick up a textbook on physics (say, Halliday and Resnick, the book I learned physics from), you won't even find them in the index (although of course the words themselves occur in the text). We sort-of-understand the colloquial and loose meaning of "cause" when it is associated with the events that are common in our lives, such as car accidents and elections and hot plates and Thanksgiving turkeys, but what guarantee is there that this understanding can be extrapolated to events on the micro or macro scales that physics deals with? Gutting seems to think that our folk understanding of these words is enough. I say it isn't.

2. Even granting the looseness of these words, it does seem that in nature there are genuinely uncaused physical events (like the radioactive decay of a particular uranium atom). Gutting doesn't even mention this possibility, except when it comes to his magical "first cause". So if an event like the decay of this particular uranium atom has no explanation, why should we be so confident that all other kinds of physical events have explanations? This exemplifies another feature of much of academic philosophy: it seems almost entirely divorced from what we have actually learned about the physical world. Gutting is basically arguing to an audience from the Middle Ages (or even earlier).

3. There is no really good reason to always dismiss an infinite regress of causes, nor is there a good reason to dismiss a circular chain of causes (e.g., A causes B, which causes A). Of course, these don't seem to happen much in our daily lives, but again, we are talking about events (the creation of the universe) which are wildly different in scale from our ordinary experience. We don't experience cosmic inflation in our daily life, either, but that's not a good reason to dismiss that physical theory.

4. Gutting's description of "contingent" and "contingency" suffers from the same defects as "cause" and "caused". What does it mean to say "Germany might not have won the 2014 World Cup"? After all, possibly the universe was created by a supreme being who has an inexplicable love for German soccer. Perhaps everything was created and set in motion deterministically by a supreme being just so Germany won the 2014 World Cup and no other outcome was possible, even in principle. And just because I can imagine a different outcome doesn't mean a different outcome is possible; if I try very, very hard I can just barely imagine a square circle or a good philosopher, but that doesn't necessarily mean those things are possible.

5. Finally, I think what's wrong with reasoning like Gutting's is that it is a kind of pseudomathematics: applying precise logical rules to vague concepts like "explanation" and "contingency" and "cause", without providing a rigorous mathematical or physical basis for those concepts, and then expecting the results to be meaningful. When you do that, it's like doing a physics experiment and reporting the results to 20 significant figures when your measuring devices provide only 3 significant figures. You run the risk of thinking you're being precise and logical, when in fact you've only extrapolated your vague and inchoate understanding of what's really going on.

I realize in making these complaints I'm in a tiny minority. Nevertheless, I think my objections have at least some validity. What do you think?