Saturday, December 20, 2014

Silly Preview Bug


Here's a silly Preview bug and a work-around. If you know a better work-around, let me know.

Preview is a document viewer and editor for the Mac. One thing it allows is "annotation"; that is, you can take an existing .jpg file (for example) and add things like arrows, rectangles, and so forth. I use the annotation for, among other things, putting red rectangles around parts of articles or papers I'm interested in.

However, it has the following silly bug. If you use Preview to look at a grayscale JPG, and then annotate it by adding color in any form (say, a red rectangle), you'll see the red rectangle as you edit it. But if you re-open the changed file after saving, you'll find (to your surprise and dismay) that the red rectangle has magically become gray. Apparently Preview is not smart enough to understand that if you add color to a grayscale JPG, you want to save it as a color JPG.

I couldn't figure out any way at all to get Preview to behave in the way I expected, and a web search didn't produce any suggestions to fix the problem. So here's a kludge to solve the problem. In a Terminal window, type a line like the following:

convert inputfile.jpg -colorspace HSL outputfile.jpg

This uses ImageMagick to change the file's colorspace; you can then annotate the result in Preview and get the expected behavior. Oddly, using "RGB" in place of "HSL" doesn't work, for reasons I don't understand.
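If you have Python handy, the same kludge can be scripted with the Pillow imaging library. This is my own sketch, not part of the ImageMagick workaround above; it assumes Pillow is installed, and the filenames are placeholders:

```python
from PIL import Image

def promote_to_rgb(in_path: str, out_path: str) -> None:
    """Re-save a (possibly grayscale) JPG as a true 3-channel RGB image,
    so that color annotations added later in Preview survive a save."""
    img = Image.open(in_path)           # a grayscale JPG opens in mode "L"
    img.convert("RGB").save(out_path, quality=95)

# Example usage (placeholder filenames):
# promote_to_rgb("inputfile.jpg", "outputfile.jpg")
```

After running this, the output file is a color JPG, and Preview no longer discards color annotations on save.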

Friday, December 19, 2014

Groundless Annual Ritual of ID Self-Congratulation


As each year draws to a close, we can expect to be treated to the annual ritual of self-congratulation by intelligent design advocates. Why, they have accomplished so much in the last year! The movement is simply overflowing with ideas! And honest, god-fearing people! And real scientists! And publishing successes! Not at all like those dogmatic, liberal, communistic, intolerant, censoring, Nazi-like evolutionists!

2014 is no different. Here we have the DI's official clown, David Klinghoffer, comparing himself to Leon Wieseltier (in part because, he says, their surnames sound similar -- I kid you not) and the Discovery Institute to The New Republic.

Actually, there are two big similarities I can think of: when TNR tried to come up with a list of 100 "thinkers" whose achievements were most in line with things that TNR cares about, science didn't even merit its own category. But theology did! And TNR's Wieseltier wrote a review of Nagel's book that demonstrated he didn't have the vaguest understanding of why Mind and Cosmos was nearly universally panned. Wieseltier even adopted intelligent design tropes like "Darwinist mob", "Darwinist dittoheads", "bargain-basement atheism", "mob of materialists", "free-thinking inquisitors", "Sacred Congregation for the Doctrine of the Secular Faith", and "scientistic tyranny". Don't let the door hit you on your way out, Leon.

Klinghoffer claims "In the evolution controversy, it's supporters of intelligent design who stand for ideas (disagree with us or not) and idealism." Well, that's something that we can actually check. Since ID is so brimming with ideas, let's look at ID's flagship journal, Bio-Complexity, and see how many papers were published this year. ID supporters are always complaining about how their groundbreaking work is censored by evil Darwinists. If true (it's not), then in Bio-Complexity they have no grounds for complaint: nearly all of the 32 people listed on the "Editorial Team" are well-known creationists and hence automatically friendly to any submission.

How many papers did Bio-Complexity manage to publish this year? A grand total of four! Why, that's 1/8th of a paper per member of the editorial team. By any measure, this is simply astounding productivity. They can be proud of how much they have added to the world's knowledge!

Looking a little deeper, we see that of these four, only one is labeled as a "research article". Two are "critical reviews" and one is a "critical focus". And of these four stellar contributions, one has two of its three authors on the editorial team, and two more are written by members of the editorial team, leaving just one with no author on the editorial team. And that one is written by Winston Ewert, who is a "senior researcher" at Robert J. Marks II's "evolutionary informatics lab". In other words, with all the ideas that ID supporters are brimming with, they couldn't manage to publish a single article by anyone not on the editorial team or directly associated with the editors.

What happened to the claim that ID creationists stand for ideas? One research article a year is not that impressive. Where are all those ideas Klinghoffer was raving about? Why can't their own flagship journal manage to publish any of them?

As 2015 draws near, don't expect that we will get any answers to these questions. Heck, not even the illustrious Robert J. Marks II can manage to respond to a simple question about information theory.

Tuesday, December 09, 2014

The Robert J. Marks II Information Theory Watch, Three Months Later


Three months ago I wrote to the illustrious Robert Marks II about a claim he made, that "we all agree that a picture of Mount Rushmore with the busts of four US Presidents contains more information than a picture of Mount Fuji".

Since I don't actually agree with that, I asked Professor Marks for some justification. He did not reply.

Now, three months later, I'm sending him another reminder.

Who thinks that he will ever send me a calculation justifying his claim?

Sunday, December 07, 2014

How Religion Rots Your Brain, Kills You, and Abandons Your Corpse to Rats


From Hamilton, Ontario comes this story of a woman so besotted with religion that she failed to encourage her ailing husband to get medical help and then, after he died, left his corpse to rot for months in a sealed bedroom while it was eaten by rats.

In the meantime, she was praying for a miraculous resurrection.

When, six months later, no miracle occurred, did she rethink her beliefs? No, of course not. She believes more strongly than ever, and is quoted as saying "In fact, it has cast me more at the mercy of God, because he is the ultimate judge."

If there's a better local example of how religion can warp your brain, I don't know it. Why we continue, as a society, to coddle religious believers and treat religion as a positive force is beyond me.

Wednesday, November 26, 2014

Barry Arrington: A Walking Dunning-Kruger Effect


The wonderful thing about lawyer and CPA Barry Arrington taking over the ID creationist blog, Uncommon Descent, is that he's so completely clueless about nearly everything. He truly is the gift that keeps on giving.

For example, here Barry claims, "Kolmogorov complexity is a measure of randomness (i.e., probability). Don’t believe me? Just ask your buddy Jeffrey Shallit (see here)".

Barry doesn't have even a glimmer of why he's completely wrong. In contrast to Shannon's theory, Kolmogorov complexity is a completely probability-free theory of information. That is, in fact, its virtue: it assigns a measure of complexity that is independent of any probability distribution. It makes no sense at all to say Kolmogorov complexity is a "measure of randomness (i.e., probability)". You can define a certain probability measure based on Kolmogorov complexity, but that's another matter entirely.
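To see the distinction concretely, here is a toy illustration of my own (not anyone's formal definition): the Kolmogorov complexity K(x) is the length of the shortest program that outputs x, a property of the individual string, with no probability distribution anywhere in sight. K is uncomputable, but any compressor gives a computable upper bound on it, which is enough to convey the idea.

```python
import random
import zlib

def description_bound(s: bytes) -> int:
    """Length of zlib-compressed s: a crude, computable upper bound on
    the Kolmogorov complexity K(s)."""
    return len(zlib.compress(s, 9))

n = 10_000
structured = b"01" * (n // 2)       # highly regular: has a short description
random.seed(42)
incompressible = bytes(random.getrandbits(8) for _ in range(n))  # typical random string

print(description_bound(structured))      # tiny compared to n
print(description_bound(incompressible))  # close to n: no short description
```

No probabilities appear anywhere above: the bound is a fact about each individual string, which is exactly the point.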

But that's Barry's M. O.: spout nonsense, never admit he's wrong, claim victory, and ban dissenters. I'm guessing he'll apply the same strategy here. If there's any better example of how a religion-addled mind works, I don't know one.

Sunday, November 23, 2014

The "Call for Pagers"


This is an actual solicitation I just received:

ijcsiet journal call for pagers november 2014

International Journal of Computer Science Information and Engg., Technologies

INVITATION FOR SUB PAPERS

!
Dear Authors,

We would like to invite you to submit quality Research Papers by e-mail [email protected] or [email protected] or both. The International Journal of Computer Science Infor! mation and Engineering Technologies (IJCSIET).. As per the guidelines submit a research paper related to one of the themes of the Journals, as per the guidelines.

These solicitations seem to be competing to see who can have the stupidest-named journal and the most typographical errors in a single message.

Monday, November 17, 2014

It Takes One


So David Berlinski thinks climate scientists are "intellectual mediocrities and pious charlatans".

Well, he should know, since I suspect he might hold Official Membership Cards in both groups.

If you want to see intellectual mediocrity and charlatanism, just read Berlinski's essay "Gödel's question" in the creationist collection Mere Creation, published by that well-recognized press devoted to research in advanced mathematics and science, InterVarsity Press.

If you can't figure out what is wrong with it, you can read Jason Rosenhouse's takedown.

Saturday, November 15, 2014

Kirk Durston Does Mathematics!


Kirk Durston is a local evangelical Christian who likes to construct unconvincing arguments for his faith. Every few years he trots them out at my university.

Here is one of his more recent attempts, a discussion of infinity. Not surprisingly, it is a confused mess.

Durston's argument is based, in part, on a distinction that does not really exist: between "potential infinity" and "completed infinity" or "actual infinity". This is a distinction that some philosophers love to talk about, but mathematicians generally do not.* You can open any contemporary mathematical textbook about set theory, for example, and not find these terms mentioned anywhere. Why is this? It's because mathematicians understand the subject well, but -- as usual -- many philosophers are extremely muddled thinkers when it comes to infinity.

Here is how Durston defines "potential infinity": "a procedure that gets closer and closer to, but never quite reaches, an infinite end". So, according to Durston, a "potential infinity" is not a set but a "procedure". Yet the very first example that Durston gives is "the sequence of numbers 1,2,3, ... gets higher and higher but it has no end". The problem should be clear: a "sequence" is not a "procedure"; a (one-sided infinite) sequence over a set S is a mapping from the non-negative integers to S. From the beginning, Durston is quite confused. His next example is "the limit of a function as x approaches infinity". But a "limit" is not a "procedure", either. Durston also doesn't seem to understand that limits involving the symbol ∞ can be restated to avoid it entirely; the ∞ in a limit is a shorthand that has little to do with infinite sets at all.
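To make that last point concrete (this is standard textbook material, not Durston's wording): the limit statement with the symbol ∞ is just shorthand for a quantified statement about real numbers, with no infinite object anywhere.

```latex
% "The limit of f(x) as x approaches infinity is L" is defined as:
\lim_{x \to \infty} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\, \exists M \;\, \forall x > M : \; |f(x) - L| < \varepsilon
```

The right-hand side quantifies only over ordinary real numbers ε, M, and x; the ∞ on the left is pure notation.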

He defines "actual infinity" or "completed infinity" as "an infinity that one actually reaches", which doesn't seem to have any actual meaning that I can divine. But then he says that "actual infinity" or "completed infinity" is "just one object, a set". Fair enough. Now we know that for Durston, an "actual" or "completed" infinity is a set. But what does it mean for a set to "reach" something? And if we consider the set of natural numbers, for example, what does it mean to say that it "reaches" infinity? After all, the set of natural numbers N contains no number called "infinity", so if anything, we should say that N does not "reach" infinity.

But then he goes on to say "First, a completed countable infinity must be treated as a single ‘object’." This is evidently wrong. For Durston, a "completed infinity" is a set, but that doesn't prevent us from discussing, treating, or thinking about its members, and there are infinitely many of them.

Next, he says "it is impossible to count to a completed infinity". That is true, but not for the reason that Durston thinks. It is because the phrase "to count to a set" is not defined. We never speak about "counting to a set" in mathematics. We might speak about enumerating the elements of a set, but then the claim that if we begin at a specific time and enumerate the elements of a countably infinite set at, say, once a second, we will never finish, is completely obvious and not of any interest.

Next, Durston claims "one can count towards a potential infinity". But since he defined a "potential infinity" as a procedure, this is clearly meaningless. What could it mean to "count towards a procedure"?

He then goes on to discuss four requirements of an infinite past history. He first asserts that "the number of seconds in the past is a completed countable infinity". Once again, Durston bumps up against his own claims. The number of seconds is not a set, and hence it cannot be a "completed infinity" by Durston's own definition. Here he is confusing the cardinality of a set with the set itself.

Next, he claims that "The number of elapsed seconds in the future is a potential infinity". But earlier he claimed that a potential infinity is a "procedure". Here he is confusing a cardinality with a procedure!

Later, Durston shows that he does not understand the difference between finite and infinite quantities: he claims that "the size of past history is equal to the absolute value of the smallest negative integer value in past history". This would only be true for finite pasts. If the past is infinite, there is no smallest negative integer, so the claim becomes meaningless. So his Argument A is wrong from the start.

At this point I think we can stop. Durston's claims are evidently so confused that one cannot take them seriously. If one wants to understand infinity well, one should read a basic text on infinity and set theory by mathematicians, not agenda-driven religionists with little advanced training in mathematics.

* There are certainly some exceptions to this general rule. The "actual"/"potential" discussion started with Aristotle and hence continues to wield influence, even though mathematicians have had a really good understanding of the infinite since Cantor. Cantor met with resistance from some mathematicians like Poincaré, but today these objections are generally regarded as groundless.

Friday, November 14, 2014

Yet Another Dubious Journal Solicitation


This one was spammed to almost everybody in our School of Computer Science here at Waterloo on Wednesday:

Dear Dr. ,

Greetings from the Journal of Advances in Robotics and Automation!

Hope you are doing well!

The Journal is in need of your fortitude. We would like to invite you to send us your valuable contribution (research article, review article, Opinion article, Editorial or short communication), to publish in our journal and improve it for indexing.

It would be highly appreciable if you could submit the article before or till 30 th November. You can visit our journal website for any details http://omicsgroup.org/journals/advances-in-robotics-automation.php

Please submit the article to the below link https://www.editorialmanager.com/engineeringjournals/default.asp or you can mail to the below link.

Please help in this Regard.

With Regards,

Rachle Green

Editorial Assistant

E-mail: [email protected]

All the warning signs are there:

1. Spam sent to everybody without discrimination, including those (like me) who have nothing to do with robotics or automation.

2. Bizarre capitalization like "Regard".

3. Bizarre word choice like "fortitude" and "highly appreciable".

4. Ridiculously rapid deadline for submission.

5. A likely bogus name for the "Editorial Assistant". The first name "Rachle" is extremely uncommon.

I do not recommend having anything to do with this journal.

Wednesday, November 12, 2014

Alan Cobham: An Appreciation


This year, 2014, marks the 50th anniversary of a talk by Alan Cobham that is often regarded as the birth of the class P, the class of polynomial-time solvable problems.

Cobham's invited half-hour talk took place during the Congress for Logic, Methodology and Philosophy of Science at the Hebrew University of Jerusalem, which was held from August 26 to September 2 1964. His paper, which he delivered on the last morning of the conference (September 2), was entitled "The intrinsic computational difficulty of functions". It later appeared in the proceedings of that conference [1], which were published in 1965.

The paper introduces a number of fundamental ideas and questions that continue to drive computational complexity theory today. For example, Cobham starts by asking "is it harder to multiply than to add?", a question for which we still do not have a satisfactory answer, 50 years later. Clearly we can add two n-bit numbers in O(n) time, but it is still not known whether it is possible to multiply in linear time. The best algorithm currently known is due to Fürer, and runs in n (log n) 2^(O(log* n)) time.

Cobham then goes on to point out the distinction between the complexity of a problem and the running time of a particular algorithm to solve that problem (a distinction that many students still don't appreciate).

Later in the paper, Cobham points out that many familiar functions, such as addition, multiplication, division, square roots, and so forth can all be computed in time "bounded by a polynomial in the lengths of the numbers involved". He suggests we consider the class "ℒ", of all functions having this property. Today we would call this class P. (Actually, P is usually considered to consist only of the {0,1}-valued functions, but this is a minor distinction.)

He then goes on to discuss why P is a natural class. The reasons he gives are the same ones I give students today: first, that the definition is invariant under the choice of computing model. Turing machines, RAMs, and familiar programming languages have the property that if a problem is in P for one such model, then it is in P for all the others. (Today, though, we have to add an asterisk, because the quantum computing model offers several problems in BQP (such as integer factorization) for which no polynomial-time solution is known in any reasonable classical model.)

A second reason, Cobham observes, is that P has "several natural closure properties" such as being "closed in particular under ... composition" (if f and g are polynomial-time computable, then so is their composition f ∘ g).
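The composition argument can be spelled out in a line or two (a standard proof sketch, not Cobham's exact wording): in t steps a machine can write at most t symbols beyond its input, so polynomial time bounds compose into polynomial time bounds.

```latex
% Suppose f is computable in time p(n) and g in time q(n), with p, q polynomials.
% On input x with |x| = n, the output satisfies |f(x)| \le n + p(n), so
% computing g(f(x)) takes at most
p(n) + q\bigl(n + p(n)\bigr)
% steps --- again bounded by a polynomial in n.
```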

He then mentions the problem of computing f(n), the n'th prime function, and asks if it is in P. Fifty years later, we still do not know the answer; the fastest algorithm known runs in O(n^(1/2+ε)) time, which is exponential in log n.
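Why does O(n^(1/2+ε)) count as exponential here? Because the input is n written in binary, so the natural input size is about log₂ n:

```latex
% With input length \ell \approx \log_2 n, a running time of
n^{1/2} = 2^{(\log_2 n)/2} \approx 2^{\ell/2}
% grows exponentially in the input length \ell.
```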

He concludes the paper by saying that the problems he mentioned in his talk are "fundamental" and their "resolution may well call for considerable patience and discrimination" --- very prescient, indeed.

Like many scientific ideas, some of the ideas underlying the definition of the class P appeared earlier in several places. For example, in a 1910 paper [2], the mathematician H. C. Pocklington discussed two different algorithms for solving a quadratic congruence, and drew a distinction between their running times, as follows: "the labour required here is proportional to a power of the logarithm of the modulus, not to the modulus itself or its square root as in the indirect process, and hence see that in the case of a large modulus the direct process will be much quicker than the indirect."

In 1956, a letter from Gödel to von Neumann discussed the possibility that proofs of assertions could be carried out in linear or quadratic time, and asked specifically about the number of steps required to test whether a number n is prime. Today we know that primality can indeed be decided in polynomial time.

About the same time, Waterloo's own Jack Edmonds was considering the same kinds of ideas. In a 1965 paper [3], he drew a distinction between algorithms that "increases in difficulty exponentially with the size of the" input and those whose "difficulty increases only algebraically". He raised, in particular, the graph isomorphism problem, whose complexity is still unsolved today.

For some reason I don't understand, Cobham never got much recognition for his ideas about P. (Neither did Edmonds.) Stephen Cook, in a review of Cobham's paper, wrote "This is perhaps the best general discussion in print" of the subject. But, as far as I know, Cobham never got any kind of award or prize.

Cobham did other fundamental work. For example, a 1954 paper in the Journal of the Operations Research Society on wait times in queues has over 400 citations in Google Scholar. In two papers published in 1969 and 1972, respectively [4,5], he introduced the notion of "automatic sequence" (that is, a sequence over a finite alphabet whose n'th term can be computed by a finite automaton reading the base-k expansion of n) and proved most of the really fundamental properties of these sequences. And in a 1968 technical report [6] he discussed proving transcendence of certain formal power series, although his proofs were not completely correct.

Alan Belmont Cobham was born on November 4 1927, in San Francisco, California. His parents were Morris Emin Cobham (aka Emin Maurice Cobham) (October 2 1888 - February 1973), a silk merchant, and Ethel Carolina Rundquist (June 24 1892 - Nov 1977), an artist. He had an older sister, Claire Caroline Cobham (June 18 1924 - November 29 2000), who worked for Boehringer-Ingelheim Pharmaceuticals. In the 1940 census, Alan was living in Palm Beach, Florida, where his father was a hotel manager.

Cobham's parents in 1920.

Sometime between 1940 and 1945, Alan's family moved to the Bronx, where Alan attended the Fieldston School. Below is a picture of Alan from the 1945 Fieldston Yearbook.

Alan attended Oberlin College. In the 1948 Oberlin College yearbook, he appears in a photo of the Mathematics Club (below). He is in the front row, 3rd from the right.

Later, Alan transferred to the University of Chicago. He worked for a time in the Operations Evaluation Group of the United States Navy in the early 1950's. He went on to do graduate work at both Berkeley and MIT, although he never got a Ph.D. He also worked at IBM Yorktown Heights from the early 1960's until 1984. One of his achievements at IBM was a computer program, "Playbridge", that was, at the time, one of the best programs in the world for playing bridge; it was profiled in an October 7 1984 article in the New York Times. In the fall of 1984, Alan left IBM and became chair of the fledgling computer science department at Wesleyan University in Middletown, Connecticut, a post which he held until June 30 1988.

I interviewed Alan Cobham in 2010. I was hoping to find out about the reception of his paper in 1964, but unfortunately, he was clearly suffering from some sort of mild dementia or senility, and could not remember any details of his work on P. When I asked him what he did to keep himself busy, he said, "I watch a lot of TV."

Alan passed away in Middletown, Connecticut, on June 28 2011. As far as I can tell, he never married, nor did he have any children.

References

[1] A. Cobham, The intrinsic computational difficulty of functions, in Y. Bar-Hillel, ed., Logic, Methodology and Philosophy of Science: Proceedings of the 1964 International Congress, North-Holland Publishing Company, Amsterdam, 1965, pp. 24-30.

[2] H. C. Pocklington, The determination of the exponent to which a number belongs, the practical solution of certain congruences, and the law of quadratic reciprocity, Proc. Cambridge Phil. Soc. 16 (1910), 1-5.

[3] J. Edmonds, Paths, trees, and flowers, Canad. J. Math. 17 (1965), 449-467.

[4] A. Cobham, On the base-dependence of sets of numbers recognizable by finite automata, Math. Systems Theory 3 (1969), 186-192.

[5] A. Cobham, Uniform tag sequences, Math. Systems Theory 6 (1972), 164-192.

[6] A. Cobham, A proof of transcendence based on functional equations, IBM Yorktown Heights, Technical report RC-2041, March 25 1968.

This is a draft of an article I am preparing. I would appreciate feedback and more information, if you have it, about Alan Cobham.

Tuesday, November 11, 2014

Mormon Church Leaders Come Clean? Hardly


A new article in the New York Times suggests that the Mormon Church is suddenly becoming transparent about the more ridiculous and appalling aspects of its history.

As evidence they point to an essay, published on the Church's website, admitting that the Church's founder, Joseph Smith, had as many as 40 wives.

Well, I suppose it's a start, but the reporter is far too generous to the Church. I wonder if we can hope to see some forthright admission that Joseph Smith was a known and convicted conman; that some of the Egyptian documents he claimed to have "translated" are not even remotely what he claimed; that large sections of Mormon holy texts are evidently plagiarized; that DNA evidence clearly shows that the Church's claims about native Americans are without foundation, and so forth.

Nope, we can't. For example, their article on DNA is full of excuses why the definitive results aren't really definitive after all.

Meanwhile, more and more people are leaving the Mormon Church because they can't get honest answers to their questions.

Mormon beliefs, like much of Christianity, are without foundation. The big difference is that Mormonism makes lots of claims that are subject to clear refutation and that the founding of the religion is so recent that its dubious origins are much easier to study.

Saturday, November 08, 2014

Big Surprise: Wind Turbines Not Evil After All


Your tax dollars at work: Health Canada spent $2.1 million to test the wacky proposition that wind turbines have a negative effect on health.

The result should not surprise anyone who spent a few minutes thinking about it: no ill effects were found.

The only reason why they had to do this study at all is likely due to complaints from wacky wind turbine opponents, such as retired pharmacist Carmen Krogh.

Monday, October 27, 2014

Creationists Desperately Crave Respect - But They Don't Get It


When creationists have an event, they frequently try to have it at a university. The reason why is clear: they crave the academic respectability that a university would give them. If they can say they've had a conference at Cornell -- well, then, there must be something to it, or such a university wouldn't allow them to hold it, right?

If not a university, they can try to have their event at some famous scientific institution, like the Smithsonian. It's a win-win for creationists: if they succeed, they get the respectability the famous name gives them; if they don't, they can cry "censorship" and make a movie about how poorly they are treated. It feeds the martyrdom scenario that many fundamentalists seem to enjoy.

The latest bogus creationist event is the "Origin Summit", which they managed to hornswoggle Michigan State University into holding. According to Science magazine, subterfuge was used from the very start: "Creation Summit secured a room at the university’s business school through a student religious group, but the student group did not learn about the details of the program—or the sometimes provocative talk titles -- until later, says MSU zoologist Fred Dyer."

Dishonesty and illegitimately seeking academic validation: two of the major characteristics of creationism.

(By the way, if you want to be really appalled, read about Jerry Bergman, one of their illustrious speakers.)

P. S. If you look at this page, you'll see evangelical whack job Lee Strobel described as a "certified agnostic". I wonder where you get certified. Maybe they meant "certifiable" instead?

Thursday, October 23, 2014

I See Berlinskis


Hey, there's a new science journal out. It's called Inference Review.

Sorry, I should have really said that it's a new "science" journal. That's because the weirdness is strong --- very strong --- with this one.

Just look at the first articles they published. One is by Michael Denton, the odd biologist who published an anti-evolution book called Evolution: A Theory in Crisis back in 1985. Needless to say, that book was filled with errors and misunderstandings. That ignorance, however, didn't prevent Denton from acquiring a big fan base among the intelligent design crowd (when does it ever?); Phillip E. Johnson and the laughable George Gilder counted themselves among Denton's fans. And Denton's new Inference Review article is touted by none other than that pathetic ID pipsqueak Casey Luskin.

Demonstrating yet another example of crank magnetism, a second Inference Review article is by global warming skeptic William Kininmonth.

Are you beginning to get suspicious yet? Who's running the show here?

Well, we don't know. Unlike real science journals, nowhere on the website for Inference Review can one find a listing of the editorial board. I wonder why they want to hide...

A sharp-eyed commenter on a private mailing list points out, however, that the journal's twitter followers include, in addition to a few gullible science journalists, two different Berlinskis: Mischa and the eminently silly Claire Berlinski.

Hmmm. What would possess two Berlinskis to follow an obscure and mysterious "science" journal with intelligent design creationist and global warming denialist leanings?

Read some of the pages and you'll come to the same conclusion I did. That ol' poseur David Berlinski is surely involved somehow. All the signs are there: the Francophilia (why else would the grotesque caricatures be featured?), the pretension, the supercilious turns of phrase, the obsession with criticizing evolution, the solicitation for articles on mathematics topics dear to David, and the use of the word "irrefragable"; all are Berlinski hallmarks.

C'mon, David! Don't hide your light under a bushel. Come out into the open.

Monday, October 20, 2014

Most Philosophers Have Nothing Interesting to Say About The Brain


If you want to understand the brain, look to neuroscience. Most philosophers (unless they have some decent neuroscience training*) have simply nothing of interest to say. The reason is that (a) their speculations were not tied to any physical models, (b) their claims were often so vague or incoherent that they could not be verified, and (c) when the claims were more precise, there was rarely an effort to prove or disprove them through experimentation.

Instead, philosophers gave us time-wasters like the "Chinese room argument" (still taken seriously by some very smart people, which I find astonishing) and the silly and overblown early anti-AI claims of Hubert Dreyfus (who actually got awards for his work).

Of course, it's going to be really hard to understand how the brain works. That's because the immense complications of the brain did not arise through intelligent design -- which would have given us nice discrete subsystems that interact in controlled and efficient ways -- but rather through the rather higgledy-piggledy bricolage of billions of years of evolution.

Nevertheless, we're making some small progress in understanding the brain and the mind and the mind-body problem and perception and memory and awareness and "understanding" and consciousness and free will, and other conundrums that have baffled philosophers for thousands of years. For example, read Crick's The Astonishing Hypothesis. The tools philosophers used -- until recently -- were simply too puny to get anything reasonable done.

When you read a philosopher on the brain or the mind, look for the warning signs. Here's one: do they treat things like "consciousness" and "understanding" as binary properties -- ones that something either has or lacks? Or do they explicitly recognize that these could lie on a continuous (or at least variable) spectrum? If the former, beware.

If reading Crick is too much work, you can also show up today, at the University of Waterloo, to hear my colleague Jeff Orchard speak on "Computing Between Your Ears".

* For philosophers who really do have something to say, look at, for example, the Churchlands.

Saturday, September 27, 2014

Calling all ID Advocates: Employment Awaits


One of the points of my long paper with Elsberry was that intelligent design (ID) advocates talk a big game, but don't actually accomplish anything.

They claim their methods are revolutionary. They claim that all sorts of fields, like archeology and forensic science, use "pre-theoretic" versions of their "design detection" methodology. Yet when it comes to actually applying their methods where they would potentially be useful, what happens?

Crickets chirping...

In fact, in 2003 we published a little paper, "Eight challenges for intelligent design advocates", where we asked ID advocates to prove their silly ideas useful in practice.

Needless to say, not a single ID advocate has come forward with an answer to any of our challenges.

Well, here's another. Recently archaeologists discovered what may be evidence of the earliest sign of humans in what is now Canada.

As the article says, researchers are not completely sure yet. They may have found a stone weir constructed to catch fish, or they may have found a natural, non-human-constructed formation: "A geologist will now study the images to ensure the rocks are not a natural formation..."

Needless to say, there is no sign these researchers are basing their decision on the research oeuvre of William Dembski to decide the question.

But why not? After all, detecting design is what ID advocates say they're really, really good at. Better than all those stupid "materialist" scientists.

So have at it, ID advocates! Volunteer your massive expertise here. Do your investigations. Create your specifications, prove they're independent, tell us what the "rejection region" is, and so forth. Write a paper with your decision about these possible stone weirs. Publish it in the peer-reviewed literature -- you know, a real journal like Science or Nature, not the creationist circle-jerk that is Bio-Complexity. (Try not to be fooled the way Dembski was about the so-called "bible codes".)

What are you waiting for?

Silly Barry. Have a Cookie.


Have you ever had this experience? You're having a technical discussion and someone you don't know well is listening. Then they enter the conversation, and from the first thing they say, you realize they have absolutely no idea what's going on, but they think they do. At this point, the only thing to do is to give the poor fellow a cookie and move on.

Person 1: "So, you see, since the derived subgroup is not supplemented by a proper normal subgroup, it follows that the group is imperfect..."

Very Silly Person: "Wait a second, the rock group Brand New split off from The Rookie Lot, and they're pretty perfect."

Person 2: "Umm... yes... Here, have a cookie and go play outside."

That's why it's usually a waste of time having a discussion with intelligent design (ID) advocates. The vast majority of them have absolutely no idea what people like William Dembski claim; they just know that ID is in agreement with their religious beliefs, so they support it.

I was reminded of this when I saw the response of certified public accountant Barry Arrington to my post pointing out his misunderstandings.

Now, maybe in some universes certified public accountants are the people to go to when you want to understand the basics of information theory. But not in the one I live in. Christians like Barry go on and on about how humble they are, but when someone takes the time to explain a basic mistake like the one Arrington made, how do they behave? With arrogance and ignorance.

I'll briefly summarize Arrington's mistakes and misrepresentations.

1. "Correction, I [Arrington] routinely ban trolls, who then claim they were banned for dissenting." A lie. All one has to do is look at this page, which has example after example of Mr. Arrington's intolerance of criticism. Then there was the famous agree with the laws of logic or be banned episode. One poster said it best: "The only firm rule at UD seems to be, Thou Shalt Not Make Barry Arrington Feel Inadequate".

2. Showing he can google a phrase just as well as the next fellow, Arrington brings in a quote from Steve Ward about pseudorandom numbers. It has pretty much nothing to do with what we are discussing, but Silly Barry doesn't understand that. Silly Barry could attend my current course CS 341, where we discuss the generation of pseudorandom numbers in Lecture 11.

3. Arrington tries to distract from his mistake by claiming "The issue is whether – as with the CD player in Ward’s illustration – it is random enough for the purposes for which it is employed." No, the issue is, was string #1 more or less random than string #2? Barry implied it was more random. I showed why he was wrong.

4. Arrington says "Shallit believes he has achieved some great triumph of argumentation by demonstrating that the first string is not truly, completely and vigorously random...". Actually, what I showed was that string #1 was actually less random than string #2.

5. "So, according to Shallit’s calculations an excerpt from Hamlet’s soliloquy is “more random” than a string of text achieved by randomly banging away on a keyboard. That is a sentence only a highly educated idiot could have written." Ahh, the traditional ploy of the scientifically illiterate: your conclusion (about evolution, global warming, the roundness of the earth, the germ theory of disease) disagrees with my preconceptions. Therefore you are the idiot! Very Silly People have used this ploy for hundreds of years. So far it's not working so well for them.

6. "The larger point – and here Shallit gives the store away – is his admission that he detected the design of the first string using rigorous statistical methods." Poor Silly Barry. I said nothing about "design" at all; the word doesn't even appear in my post. I didn't say anything about "statistical methods", either. The method I used is based on information theory, not statistics. (But Barry knows little advanced mathematics, it's clear.)

Barry, and all ID advocates, need to understand one basic point. It's one that Wesley Elsberry and I have been harping about for years. Here it is: the opposite of "random" is not "designed".

I'll say it again. Just because an event E is not "random" (more precisely, that it deviates from a uniform distribution with equal probabilities) does not mean it was "designed" by some natural or supernatural agent. There are many possible explanations. It could have arisen from an independent random process with unequal letter probabilities, like (in the case of string #1) a stochastic process biased towards the letters "a", "s", and "d". It could have arisen from a non-independent random process like a Markov model. It could have arisen from some deterministic process -- basically, an algorithm -- that could have arisen naturally or with human agency. There are lots of possibilities. Silly Barry doesn't understand that.
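To make the first possibility concrete, here's a minimal Python sketch (my own illustration, not anything from the original exchange): an independent source biased toward "a", "s", and "d" produces strings that compress much better than uniform ones, and so fail a randomness test, without any designer being involved.

```python
import random
import zlib

random.seed(1)  # fixed seed so the sketch is reproducible

# A biased i.i.d. source: "a", "s", and "d" are 20 times more likely
# than any other letter. Nothing here is "designed"; the source is
# merely non-uniform.
alphabet = "abcdefghijklmnopqrstuvwxyz"
weights = [20 if c in "asd" else 1 for c in alphabet]
biased = "".join(random.choices(alphabet, weights=weights, k=500))

# A uniform i.i.d. source over the same alphabet, for comparison.
uniform = "".join(random.choices(alphabet, k=500))

# The biased string compresses substantially better, so it deviates
# from uniform randomness -- yet no agent designed it.
print("biased: ", len(zlib.compress(biased.encode())))
print("uniform:", len(zlib.compress(uniform.encode())))
```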

7. "Jeffery Shallit has spent years denying the basic formulation of ID: Some patterns are best explained by the act of an intelligent agent." I'll overlook the spelling incompetence of Mr. Arrington (although it is basic politeness to spell a man's name correctly). Nobody denies human agency or the ability of scientists to detect it in many cases. What ID critics point out, though, is that those who detect human agency aren't detecting "Design" with a capital "D". They are detecting artifacts: the characteristic product of human activity. For a good look at why capital-D "Design" is basically a charade, read the fine article of Wilkins and Elsberry published in Biology and Philosophy. When Mr. Arrington gets his Very Silly views published in a philosophy journal, give me a call.

8. Yet here he is yelling from the rooftops: "That first string of text only appears to be random; I have demonstrated rigorously that it was in fact designed." This is a manufactured quote by Mr. Arrington. (Using fake quotes is a disease of ID advocates in general and Mr. Arrington in particular.) I didn't say that at all. I said string #1 was not as random as string #2 (in the sense of being more compressible), and then I guessed how it came to be constructed, a guess that is based on what I know of Mr. Arrington and keyboards and the typical behavior of certified public accountants.

9. "In ID theory the “specification” of a strnig of text, for instance, is closely related to how compressible the description of the string is. In other words, whether a given string of text is “specified” is determined by whether the description of the string can be compressed. Take the second group of text as an example. It can be compressed to “first 12 lines of Hamlet’s soliloquy.” This is simply not possible for the first string. The shortest full description of the first string is nothing less than the string itself."

Once again Barry shows he doesn't understand anything. In Dembski's original formulation, there was no requirement that the "description" of a string be compressible. (Arrington seems even more confused, in that he talks about the "description of the string" as opposed to the string itself. Does he think the specification needs to be compressible, or the string? Who can tell, with such shoddy writing?) And he makes the silliest mistake of all when he says "The shortest full description of the first string is nothing less than the string itself". That hilarious misunderstanding gives away the store: Arrington has understood nothing of what I wrote. The experiment using gzip shows that string #1, just like string #2, can indeed be compressed; its shortest description is indeed shorter than the string itself.

Barry says that string #2 can be compressed to "first 12 lines of Hamlet's soliloquy". But of course, this is not a compression that anyone in information theory would regard as legitimate, because it does not allow one to reconstruct the string losslessly without reference to an external source: namely, a book of Shakespeare. Real compressions do not have external referents like that (except, perhaps, to the particular computational model of compression). We talk about this basic misunderstanding in my CS 462/662 course, given each year in the Winter term. Mr. Arrington is invited to attend.

Now one could consider that "design detection" takes place in a framework of "background knowledge". Then "first 12 lines of Hamlet's soliloquy" would be a specification, not a compression. But then someone raised in Mongolia would likely not have the same background knowledge. So their measure of the "specified information" or "specified complexity" of string #2 would differ from Barry's. This shows the weakness of the "background knowledge" component of ID claims, as we pointed out: quantities in mathematics and science are not supposed to differ based on who measures them.

10. And finally, here's the funniest thing of all. There are actually at least two different legitimate ways to criticize my analysis on a technical basis. I even gave a not-so-subtle hint about one of them! Yet the Uncommon Descent folks, harnessing all the power of intellectual heavyweights like Barry Arrington and Eric Anderson and Gordon Mullings, could not manage to find them. What a surprise.

Silly Barry. Have a cookie.

Thursday, September 25, 2014

Eric Anderson is Silly, Too!


I really love Uncommon Descent!

Believe it or not, they have a whole thread devoted to how horrible I am.

Here's what silly nonentity Eric Anderson says:

"As I pointed out, if Shallit’s bluff were true, he would be sitting on a Nobel Prize right now and would not be revealing the secret in some college computer science class."

My supposed "bluff" is my claim that we know what produces information. But it's not a bluff. Ask any mathematician or computer scientist if they know how to produce information in the normally-understood (Kolmogorov) sense of the word, and the answer is easy.

Randomness.

Any process generating truly random bits will generate strings with high Kolmogorov information with very very high probability.
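A quick sanity check of that claim (a sketch of mine, not part of the original post): hand a DEFLATE compressor genuinely random bytes and it cannot shrink them.

```python
import os
import zlib

data = os.urandom(100_000)  # 100 KB of cryptographically random bytes
compressed = zlib.compress(data, level=9)

# For random input, the "compressed" output is essentially no smaller
# than the original; the container overhead can even make it larger.
print(len(data), len(compressed))
```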

Don't expect creationists to understand this, however.

Want to know more? Attend my CS 462/662 course starting in Winter 2015.

Barry Arrington's Silly Misunderstanding


Ever since the ID creationist blog Uncommon Descent was taken over by Barry Arrington, it's been a first-class show of the irremediable arrogance and ignorance of creationists. I don't post there because Arrington routinely bans dissenters, but I do sometimes enjoy the show.

I particularly enjoyed this post because it touches on the subject of my Winter 2015 course here at the University of Waterloo. Arrington displays two strings of symbols and says "the second string is not a group of random letters because it is highly complex and also conforms to a specification". By implication he thinks the first string is a group of random letters, or at the very least, more random than the second.

Here are the two strings in question, cut-and-pasted from Arrington's post:

#1:

OipaFJPSDIOVJN;XDLVMK:DOIFHw;ZD
VZX;Vxsd;ijdgiojadoidfaf;asdfj;asdj[ije888
Sdf;dj;Zsjvo;ai;divn;vkn;dfasdo;gfijSd;fiojsa
dfviojasdgviojao’gijSd’gvijsdsd;ja;dfksdasd
XKLZVsda2398R3495687OipaFJPSDIOVJN
;XDLVMK:DOIFHw;ZDVZX;Vxsd;ijdgiojadoi
Sdf;dj;Zsjvo;ai;divn;vkn;dfasdo;gfijSd;fiojsadfvi
ojasdgviojao’gijSd’gvijssdv.kasd994834234908u
XKLZVsda2398R34956873ACKLVJD;asdkjad
Sd;fjwepuJWEPFIhfasd;asdjf;asdfj;adfjasd;ifj
;asdjaiojaijeriJADOAJSD;FLVJASD;FJASDF;
DOAD;ADFJAdkdkas;489468503-202395ui34

#2:

To be, or not to be, that is the question—
Whether ’tis Nobler in the mind to suffer
The Slings and Arrows of outrageous Fortune,
Or to take Arms against a Sea of troubles,
And by opposing, end them? To die, to sleep—
No more; and by a sleep, to say we end
The Heart-ache, and the thousand Natural shocks
That Flesh is heir to? ‘Tis a consummation
Devoutly to be wished. To die, to sleep,
To sleep, perchance to Dream; Aye, there’s the rub,
For in that sleep of death, what dreams may come,
When we have shuffled off this mortal coil,

Needless to say, Arrington -- a CPA and lawyer who apparently has no advanced training in the mathematics involved -- doesn't specify what he means by "group of random letters". I think a reasonable interpretation would be that he is imagining that each message is generated by a stochastic process where each letter is generated independently, with uniform probability, from some finite universe of symbols.

Even with just a cursory inspection of the two strings, we see that neither one of them is likely to be "random" in this sense. We immediately see this about the second string because the set of reasonable English texts is quite small among the set of all possible strings. But we also see the same thing about the first because (for example) the trigram "asd" occurs much more often than one could reasonably expect for a random string. Looking at a keyboard suggests a reasonable interpretation: somebody, probably Arrington, dragged his hands repeatedly over the keyboard in a fashion he thought was "random" -- but is evidently not. (It is much harder to generate random strings than most untrained people think.)
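The "asd" observation is easy to quantify. Here's a short Python sketch (mine) that counts trigrams in string #1 as pasted above; under the uniform model, with dozens of possible symbols, the expected number of occurrences of any fixed trigram in a string of about 500 characters is far below one.

```python
from collections import Counter

# String #1, copied verbatim from Arrington's post.
s1 = """OipaFJPSDIOVJN;XDLVMK:DOIFHw;ZD
VZX;Vxsd;ijdgiojadoidfaf;asdfj;asdj[ije888
Sdf;dj;Zsjvo;ai;divn;vkn;dfasdo;gfijSd;fiojsa
dfviojasdgviojao’gijSd’gvijsdsd;ja;dfksdasd
XKLZVsda2398R3495687OipaFJPSDIOVJN
;XDLVMK:DOIFHw;ZDVZX;Vxsd;ijdgiojadoi
Sdf;dj;Zsjvo;ai;divn;vkn;dfasdo;gfijSd;fiojsadfvi
ojasdgviojao’gijSd’gvijssdv.kasd994834234908u
XKLZVsda2398R34956873ACKLVJD;asdkjad
Sd;fjwepuJWEPFIhfasd;asdjf;asdfj;adfjasd;ifj
;asdjaiojaijeriJADOAJSD;FLVJASD;FJASDF;
DOAD;ADFJAdkdkas;489468503-202395ui34"""

# Count every overlapping trigram.
trigrams = Counter(s1[i:i + 3] for i in range(len(s1) - 2))

# "asd" occurs many times; a uniform source would almost surely
# produce zero occurrences in a string this short.
print('occurrences of "asd":', trigrams["asd"])
```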

If we want to test this in a quantitative sense, we can use a lossless compression scheme such as gzip, an implementation of Lempel-Ziv. A truly random file will not be significantly compressible, with very very high probability. So a good test of randomness is simply to attempt to compress the file and see if it is roughly the same size as the original. The larger the produced file, the more random the original string was.

Here are the results. String #1 has length 502, as measured with the "wc" program. (This count includes the newline characters separating the lines.) String #2 has length 545.

Using gzip on Darwin OS on my Mac, I get the following results: string #1 compresses to a file of size 308, and string #2 compresses to a file of size 367. String #2's compressed version is bigger, and therefore string #2 is more random than string #1: exactly the opposite of what Arrington implied!

I suppose one could argue that the right measure of "randomness" is not the size of the compressed file, but rather the difference in size between the compressed file and the original. The smaller this difference is, the more random the original string was. So let's do that test, too. I find that for string #1, this difference is 502-308 = 194, and for string #2, this difference is 545-367 = 178. Again, for string #2 this difference is smaller and hence again string #2 is more random than string #1.

Finally, one could argue that we're comparing apples and oranges because the strings aren't the same size. Maybe we should compute the percentage of compression achieved. For string #1 this percentage is 194/502, or 38.6%. For string #2 this percentage is 178/545, or 32.7%. String #2 was compressed less in terms of percentage and hence once again is more random than string #1.
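All three measures are easy to reproduce. Below is a hedged Python sketch using zlib (the same DEFLATE family that gzip uses, though its byte counts will not exactly match the gzip numbers above), applied to two stand-in strings of my own: a uniform random string and a repetitive keyboard-mash string.

```python
import random
import zlib

def report(name, s):
    """Print the three compression-based measures discussed above."""
    raw = s.encode("utf-8")
    comp = zlib.compress(raw, level=9)
    saved = len(raw) - len(comp)
    print(f"{name}: original={len(raw)}, compressed={len(comp)}, "
          f"difference={saved}, percentage={100 * saved / len(raw):.1f}%")

random.seed(0)
printable = [chr(c) for c in range(33, 127)]  # 94 printable ASCII symbols

report("uniform random", "".join(random.choices(printable, k=500)))
report("keyboard mash", ";asdfj;asdj;dfasdo;gfij" * 22)
```

By every one of the three measures, the repetitive string compresses far more, marking it as far less random than the uniform one.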

Barry's implications have failed spectacularly in every measure I tried.

Ultimately, the answer is that it is completely reasonable to believe that neither of Barry's two strings is "random" in the sense of likely to have been generated randomly and uniformly from a given universe of symbols. A truly random string would be very hard to compress. (Warning: if you try to do this with gzip make sure you use the entire alphabet of symbols available to you; gzip is quite clever if your universe is smaller.)
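To see why that warning matters, consider a string that is uniformly random over its own alphabet but whose alphabet is small relative to the 8-bit characters storing it. A sketch (my own example):

```python
import random
import zlib

random.seed(2)

# Uniformly random -- but over only 4 symbols.
small = "".join(random.choices("acgt", k=10_000))
compressed = zlib.compress(small.encode(), level=9)

# Each symbol carries 2 bits of entropy but occupies 8 bits as a
# character, so even this genuinely random string shrinks to roughly
# a quarter of its size.
print(len(small), len(compressed))
```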

By the way, I should point out that Barry's "conforms to a specification" is the usual ID creationist nonsense. He doesn't even understand Dembski's criterion (not surprising, since Dembski stated it so obscurely). String #2 can be said to "conform" to many, many different specifications: English text, English text written by Shakespeare, messages of length less than 545, and so forth. But the same can be said for string #1. We addressed this in detail in our long paper published in Synthese, but it seems most ID creationists haven't read it. For one thing, it's not good enough to assert just "specification"; even by Dembski's own claims, one must determine that the specification is "independent" and one must compute the size of the space of strings that conforms to the specification. For Dembski, it's not the probability of the string being generated that is of concern; it's the relative measures of the universe of strings and the set of strings matching the specification that matter! Most ID creationists don't understand this basic point.

Elsewhere, Arrington says he thinks string #1 is more complex than string #2 (more precisely he says the "thesis ... that the first string is less complex than the second string ... is indefensible").

Maybe Barry said the exact opposite of what he meant; his writing is so incoherent that it wouldn't surprise me. But his statement, as given, is wrong again. For mathematicians and computer scientists, the complexity of a string can be measured as the size of an optimal compressed version of that string. Kolmogorov complexity is uncomputable, so in practice one can use a lossless compression scheme as we did above. The larger the compressed result, the more complex the original string. And the results are clear: string #1 is, as measured by gzip, somewhat less complex than string #2.

ID creationists, as I've noted previously, usually turn the notion of Kolmogorov complexity on its head, pretending that random strings are not complex at all. We made fun of this in our proposal for "specified anti-information" in the long version of our paper refuting Dembski. Oddly enough, some ID creationists have now adopted this proposal as a serious one, although of course they don't cite us.

Finally, one unrelated point: Barry talks about his disillusion when his parents lied to him about the existence of a supernatural figure --- namely, Santa Claus. But he doesn't have enough introspection to understand that the analogy he tries to draw (with "materialist metaphysics") is completely backwards. Surely the right analogy is Santa Claus to Jesus Christ. Both are mythical figures, both are celebrated by and indoctrinated in by parents, both supposedly have supernatural powers, both are depicted as wise and good, and both are comforting to small children. The list could go on and on. How un-self-aware does one have to be to miss this?

Monday, September 22, 2014

Record Coverage Fails Again


Despite the fact that Waterloo Region is home to many scientific and technically-minded people and businesses, the coverage of science and technology by our local paper, the Record, is truly abysmal. I've written about it before.

Here's yet another example: this article about naturopathy didn't include a single skeptical voice. Couldn't the reporter have noted, for example, that homeopathy is regarded by most medical experts as a fake and worthless therapy?