Showing posts with label information theory. Show all posts

Sunday, July 10, 2011

More Silliness from Claire Berlinski

I spent a little more time digging into the treasure trove of dreck that is Claire Berlinski's video oeuvre.

Ms. Berlinski, it seems, was present at a by-invitation only conference in Italy entitled "Great Expectations". It's hard to find anything about this conference online because, you see, it was "secret". But it's not hard to figure out the agenda. After all, the people present seem to have been

- Paul Nelson, creationist and remarkably unproductive philosopher for whom Paul Nelson Day was named. Watch Nelson squirm, evade, and do everything possible except answer the question of how old he thinks the earth is!

- Robert Marks, intelligent design proponent and writer of some remarkably silly papers about evolutionary algorithms

- David Berlinski, father of Ms. Berlinski, author of some remarkably bad popular books about mathematics, and contributor to such eminent scientific journals as Commentary. You can see Berlinski in all his superciliousness here. (Yet more superciliousness: David Berlinski on Gödel; David Berlinski on Popper.)

Berlinski claims that we should be more open intellectually and that some ideas are unjustly off limits to discussion. As usual, he's wrong. We just laugh at his ideas, and those of Nelson, because they are so incoherent. Even his daughter doesn't seem to buy it!

- Moshe Averick, creationist rabbi and sucker who apparently fell hook, line, and sinker for the scam that is "specified complexity", despite it having been debunked long ago

- Stephen Meyer, creationist, philosopher, and author of a bad book containing misunderstandings of information theory. You can see his videos here: Part 1A, Part 1B, Part 2, Part 2B, Part 3, and Part 4. It's funny to hear Meyer claiming that he "works on the origin of life". I wonder what experiments he has done and what labs he does them in. You can also hear Meyer extolling his creationist journal, Bio-Complexity, which has thus far published a grand total of 4 articles and one "critical review" -- every single one of which has at least one author listed on the editorial team page. It's a creationist circle jerk!

Meyer is allowed to repeat his bogus claim that "Whenever we find information, and we trace it back to its source ... we always come to an intelligence, to a mind, not a material process." Ms. Berlinski doesn't question him at all on this, despite the fact that it is evidently false.

- Richard von Sternberg, professional creationist martyr and co-author with Meyer of a drecky article filled with misunderstandings and misrepresentations.

- Michael Denton, author of a wildly wrong book, filled with misunderstandings about basic biology. Video here.

- perhaps Jonathan Wells. I can't be absolutely sure, but Meyer in this interview refers to cancer, and Wells is well-known for his wacky ID cancer theory. Of course, "journalist" Berlinski doesn't ask many hard questions. In the one hard question she does ask, about what are the best arguments against ID, Meyer can't even bring himself to mention the name of the person responsible.

You can watch Ms. Berlinski's "interviews" with Marks and Averick here (at a site where you have to pay them money to leave comments). You'd think with some of Marks' work on the record as being deficient, a journalist would have some hard questions to ask. But no, a giggling Ms. Berlinski lets Marks maunder on, making bogus claims like "All biological models of evolution which have been implemented in computer code only work because the information has been front-loaded into the program and the evolutionary process in itself creates no information" without asking any tough questions at all. (Marks, by the way, seems to think that Shannon coined the word "bit", when in fact it was Tukey.)

Reading the comments at that page is a real hoot, too. We have one commenter who "grew up with Information Theory from its early days", yet makes the false claims that (1) "there is still vigorous debate about which algorithms produce a truly random number"; (2) "Whether you can determine the stopping point of a Turing machine is unsettled"; (3) "Many of these problems are essentially involved with extending Godel's Theorem beyond the realm of integers"; (4) "you have to consider what in Computation Theory is termed np-complete or in Penrose's term, non-computable". He also adds, helpfully, "I hope this sheds some light". Indeed it does, but not the kind of light he thinks.

It's just so funny to hear the people in Berlinski's interviews talk about how "orthodoxy" is "stifling" discussion when at least three of the attendees are members of conservative religious denominations that claim for themselves the right to determine truth for everyone else. Project much?

One thread that runs through many of Berlinski's interviews can be summarized as follows: "Waah! We're not taken seriously!" I'm not at all impressed with this. If you want to be taken seriously, don't hold "secret" conferences and make dark implications about being suppressed. If you want to be taken seriously, do some serious science; don't post videos with fart noises making fun of court decisions you don't like. If you want to be taken seriously, respond to critics in a professional way; don't depend on ignorant attack-dog lawyers as your surrogates. If you want to be taken seriously, don't use credential inflation on your supporters and denigrate the actual scientific achievements of your detractors. You want some respect? Then earn it.

Tuesday, May 24, 2011

The Information - by James Gleick

I'm currently reading The Information: A History, A Theory, A Flood by James Gleick (famed for being the author of Chaos). It's not bad at all; in fact, it's pretty good. For the moment, I'll be content to make the following observation:

My colleagues Ming Li and Bin Ma (Ming is the author of An Introduction to Kolmogorov Complexity and Its Applications) get a nice mention on page 320, as does Charlie Bennett's theory of logical depth.

Intelligent design advocates will gnash their teeth to see that their hated Richard Dawkins gets ten full pages, and his book The Selfish Gene is described as "brilliant and transformative" -- which, of course, it was.

They'll also be surprised to see that their own "Isaac Newton of information theory" doesn't get a single mention. Not a word.

This all goes to show that Gleick actually knows something about the subject and is not fooled by the bleatings of the religious.

Sunday, December 12, 2010

Jehovah's Witness Creationist Writes Me

I don't get that many creationists writing me, but I am indebted to a certain J. M. of Massachusetts, who has recently written to send me a copy of the November 2010 Jehovah's Witness publication, Awake!. He claims "This magazine points out some flaws in the atheists' reasoning."

Well, no, it doesn't.

The first article is entitled "Atheists on a Crusade", and is just one in a long line of articles by theists using religious language to denigrate atheism and evolution. "Called the new atheists," the article says, "they are not content to keep their views to themselves".

This just cracks me up. The Jehovah's Witnesses - you know, the folks who go door-to-door to spread their religion - are complaining because atheists are not content to keep their views to themselves. My irony meter just broke.

Another article, "Has Science Done Away with God?" repeats the canard that "everyday experience tells us that design -- especially highly sophisticated design -- calls for a designer". Well, no, it doesn't. That was resolved 150 years ago, when Darwin published The Origin of Species. We know now that mechanisms like mutation and natural selection can produce complexity and the appearance of design. The article goes on to ask, "What is the only source of information that we know of? In a word, intelligence". But anyone taking an introductory course in information theory at my university knows this is a lie.

It's a shame that creationists have to resort to untruths like this, but it's all they have.

Wednesday, January 13, 2010

Stephen Meyer's Bogus Information Theory

A couple of months ago, I finished a first reading of Stephen Meyer's new book, Signature in the Cell. It was very slow going because there is so much wrong with it, and I tried to take notes on everything that struck me.

Two things struck me as I read it: first, its essential dishonesty, and second, Meyer's significant misunderstandings of information theory. I'll devote a post to the book's many misrepresentations another day, and concentrate on information theory today. I'm not a biologist, so I'll leave a detailed discussion of what's wrong with his biology to others.

In Signature in the Cell, Meyer talks about three different kinds of information: Shannon information, Kolmogorov information, and a third kind that has been invented by ID creationists and has no coherent definition. I'll call the third kind "creationist information".

Shannon's theory is a probabilistic theory. Shannon equated information with a reduction in uncertainty. He measured this by computing the reduction in entropy, where entropy is given by -log2 p and p is a probability. For example, if I flip two coins behind my back, you don't know how either of them turned out, so your information about the results is 0. If I now show you one coin, then I have reduced your uncertainty about the results by -log2 1/2 = 1 bit. If I show you both, I have reduced your uncertainty by -log2 1/4 = 2 bits. Shannon's theory is completely dependent on probability; without a well-defined probability distribution on the objects being discussed, one cannot compute Shannon information. If one cannot realistically estimate the probabilities, any discussion of the relevant information is likely to be bogus.
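The coin example can be checked directly; the reduction in uncertainty here is just the negative base-2 logarithm of the probability (the function name below is mine):

```python
import math

def bits_gained(p):
    """Reduction in uncertainty, in bits, from observing an outcome of probability p."""
    return -math.log2(p)

# Two fair coins flipped behind my back: seeing one coin, then both.
print(bits_gained(1/2))  # 1.0 bit
print(bits_gained(1/4))  # 2.0 bits
```

Note that the whole calculation rests on the probabilities 1/2 and 1/4 being well defined; with no distribution in hand, there is nothing to plug in.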

In contrast, Kolmogorov's theory of information makes no reference to probability distributions at all. It measures the information in a string relative to some universal computing model. Roughly speaking, the Kolmogorov information in (or complexity of) a string x of symbols is the length of the shortest program P and input I such that P outputs x on input I. For example, the Kolmogorov complexity of a bit string of length n that starts 01101010001..., where bit i is 1 if i is a prime and 0 otherwise, is bounded above by log2 n + C, where C is a constant that takes into account the size of the program needed to test primality.
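A sketch of the short program behind that string (the function names are mine): the point is that a description of the whole n-bit string needs only the few bits required to write down n, plus a fixed-size primality test, which is where the log2 n + C bound comes from.

```python
def is_prime(i):
    """Trial-division primality test; a fixed-size piece of the description."""
    if i < 2:
        return False
    return all(i % d for d in range(2, int(i**0.5) + 1))

def prime_indicator(n):
    """The n-bit string whose i-th bit (1-indexed) is 1 iff i is prime."""
    return "".join("1" if is_prime(i) else "0" for i in range(1, n + 1))

print(prime_indicator(11))  # 01101010001
```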

Neither Shannon's nor Kolmogorov's theory has anything to do with meaning. For example, a message can be very meaningful to humans, and yet have little Kolmogorov information (such as the answer "yes" to a marriage proposal), and have little meaning to humans, yet have much Kolmogorov information (such as most strings obtained by 1000 flips of a fair coin).

Both Shannon's and Kolmogorov's theories are well-grounded mathematically, and there are thousands of papers explaining them and their consequences. Shannon and Kolmogorov information obey certain well-understood laws, and the proofs are not in doubt.

Creationist information, as discussed by Meyer, is an incoherent mess. One version of it has been introduced by William Dembski, and criticized in detail by Mark Perakh, Richard Wein, and many others (including me). Intelligent design creationists love to call it "specified information" or "specified complexity" and imply that it is widely accepted by the scientific community, but this is not the case. There is no paper in the scientific literature that gives a rigorous and coherent definition of creationist information; nor is it used in scientific or mathematical investigations.

Meyer doesn't define it rigorously either, but he rejects the well-established measures of Shannon and Kolmogorov, and wants to use a common-sense definition of information instead. On page 86 he approvingly quotes the following definition of information: "an arrangement or string of characters, specifically one that accomplishes a particular outcome or performs a communication function". For Meyer, a string of symbols contains creationist information only if it communicates or carries out some function. However, he doesn't say explicitly how much creationist information such a string has. Sometimes he seems to suggest the amount of creationist information is the length of the string, and sometimes he suggests it is the negative logarithm of the probability. But probability with respect to what? Its causal history, or with respect to a uniform distribution of strings? Dembski's definition has the same flaws, but Meyer's vague definition introduces even more problems. Here are just a few.

Problem 1: there is no universal way to communicate, so Meyer's definition is completely subjective. If I receive a string of symbols that says "Uazekele?", I might be tempted to ignore it as gibberish, but a Lingala speaker would recognize it immediately and reply "Mbote". Quantities in mathematics and science are not supposed to depend on who is measuring them.

Problem 2: If we measure creationist information solely by the length of the string, then we can wildly overestimate the information contained in a string by padding. For example, consider a computer program P that carries out some function, and the identical program P', except n no-op instructions have been added. If he uses the length measure, then Meyer would have to claim that P' has something like n more bits of creationist information than P. (In the Kolmogorov theory, by contrast, P' would have only at most order log n more bits of information.)
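A toy illustration of the padding problem (the program text below is hypothetical): no-op padding inflates a length-based measure by the full length of the padding, while a short description of the padded program costs only about log2 n extra bits.

```python
# A toy program P, and P' padded with 1000 no-op lines.
P = "print(sum(range(10)))"
P_padded = P + "\npass" * 1000  # identical behavior, much longer text

# A length-based measure credits P' with ~5000 extra characters of "information":
print(len(P_padded) - len(P))  # 5000

# In the Kolmogorov account, "P followed by 1000 'pass' lines" is a
# description only about log2(1000) ≈ 10 bits longer than a description of P.
```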

Problem 3: If we measure creationist information with respect to the uniform distribution on strings, then Meyer's claim (see below) that only intelligence can create creationist information is incorrect. For example, any transformation that maps a string to the same string duplicated 1000 times creates a string that, with respect to the uniform distribution, is wildly improbable; yet it can easily be produced mechanically.
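A minimal sketch of such a mechanical transformation (the starting string is arbitrary): one line of code produces a string that the uniform measure brands as astronomically improbable.

```python
import math

s = "GATTACA"   # an arbitrary starting string over a 4-letter alphabet
t = s * 1000    # a purely mechanical duplication; no mind required

# Under the uniform distribution on strings of this length, t has
# probability 4**(-len(t)) -- wildly improbable, yet trivial to produce.
print(len(t))                 # 7000
print(len(t) * math.log2(4))  # 14000.0 "bits" by the uniform-measure count
```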

Problem 4: If we measure creationist information with respect to the causal history of the object in question, then we are forced to estimate these probabilities. But since Meyer is interested in applying his method to phenomena that are currently poorly understood, such as the origin of life, all he's really doing (since his creationist information is sometimes the negative log of the probability) is estimating the probability of these events -- something we can't reasonably do, precisely because we don't know that causal history. In this case, all the talk about "information" is a red herring; he might as well say "Improbable - therefore designed!" and be done with it.

Problem 5: All Meyer seems interested in is whether the string communicates something or has a function. But some strings communicate more than others, despite being the same length, and some functions are more useful than others. Meyer's measure doesn't take this into account. The strings "It will rain tomorrow" and "Tomorrow: 2.5 cm rain" have the same length, but clearly one is more useful than the other. Meyer, it seems to me, would claim they have the same amount of creationist information.

Problem 6: For Meyer, information in a computational context could refer to, for example, a computer program that carries out a function. The longer the program, the more creationist information. Now consider a very long program that has a one-letter syntax error, so that the program will not compile. Such a program does not carry out any function, so for Meyer it has no information at all! Now a single "point mutation" will magically create lots more creationist information, something Meyer says is impossible.

Even if we accept Meyer's informal definition of information with all its flaws, his claims about information are simply wrong. For example, he repeats the following bogus claim over and over:

p. 16: "What humans recognize as information certainly originates from thought - from conscious or intelligent human activity... Our experience of the world shows that what we recognize as information invariably reflects the prior activity of conscious and intelligent persons."

p. 291: "Either way, information in a computational context does not magically arise without the assistance of the computer scientist."

p. 341: "It follows that mind -- conscious, rational intelligent agency -- what philosophers call "agent causation," now stands as the only cause known to be capable of generating large amounts of specified information starting from a nonliving state."

p. 343: "Experience shows that large amounts of specified complexity or information (especially codes and languages) invariably originate from an intelligent source -- from a mind or personal agent."

p. 343: "...both common experience and experimental evidence affirms intelligent design as a necessary condition (and cause) of information..."

p. 376: "We are not ignorant of how information arises. We know from experience that conscious intelligent agents can create informational sequences and systems."

p. 376: "Experience teaches that whenever large amounts of specified complexity or information are present in an artifact or entity whose causal story is known, invariably creative intelligence -- intelligent design -- played a role in the origin of that entity."

p. 396: "As noted previously, as I present the evidence for intelligent design, critics do not typically try to dispute my specific empirical claims. They do not dispute that DNA contains specified information, or that this type of information always comes from a mind..."

I have a simple counterexample to all these claims: weather prediction. Meteorologists collect huge amounts of data from the natural world: temperature, pressure, wind speed, wind direction, etc., and process this data to produce accurate weather forecasts. So the information they collect is "specified" (in that it tells us whether to bring an umbrella in the morning), and clearly hundreds, if not thousands, of these bits of information are needed to make an accurate prediction. But these bits of information do not come from a mind - unless Meyer wants to claim that some intelligent being (let's say Zeus) is controlling the weather. Perhaps intelligent design creationism is just Greek polytheism in disguise!

Claims about information are central to Meyer's book, but, as we have seen, many of these claims are flawed. There are lots and lots of other problems with Meyer's book. Here are just a few; I could have listed dozens more.

p. 66 "If the capacity for building these structures and traits was something like a signal, then a molecule that simply repeated the same signal (e.g., ATCG) over and over again could not get the job done. At best, such a molecule could produce only one trait."

That's not clear at all. The number of repetitions also constitutes information, and indeed, we routinely find that different numbers of repetitions result in different functions. For example, Huntington's disease has been linked to different numbers of repetitions of CAG.

p. 91: "For this reason, information scientists often say that Shannon's theory measures the "information-carrying capacity," as opposed to the functionally specified information or "information content," of a sequence of characters or symbols."

Meyer seems quite confused here. The term "information-carrying capacity" in Shannon's theory refers to a channel, not a sequence of characters or symbols. Information scientists don't talk about "functionally specified information" at all, and they don't equate it with "information content".

p. 106: (he contrasts two different telephone numbers, one randomly chosen, and one that reaches someone) "Thus, Smith's number contains specified information or functional information, whereas Jones's does not; Smith's number has information content, whereas Jones' number has only information-carrying capacity (or Shannon information)."

This is pure gibberish. Information scientists do not speak about "specified information" or "functional information", and as I have pointed out, "information-carrying capacity" refers to a channel, not a string of digits.

p. 106: "The opposite of a complex sequence is a highly ordered sequence like ABCABCABCABC, in which the characters or constituents repeat over and over due to some underlying rule, algorithm, or general law."

This is a common misconception about complexity. While it is true that in a string with low Kolmogorov complexity, there is an underlying rule behind it, it is not true that the "characters or constituents" must "repeat over and over". For example, the string of length n giving a 1 or 0 depending on whether i is a prime number (for i from 1 to n) has low Kolmogorov complexity, but does not "repeat over and over".

p. 201 "Building a living cell not only requires specified information; it requires a vast amount of it -- and the probability of this amount of specified information arising by chance is "vanishingly small."

Pure assertion. "Specified information" is not rigorously defined. How much specified information is there in a tornado? A rock? The arrangement of the planets?

p. 258 "If a process is orderly enough to be described by a law, it does not, by definition, produce events complex enough to convey information."

False. We speak all the time about statistical laws, such as the "law of large numbers". Processes with a random component, such as mutation+selection, can indeed generate complex outcomes and information.

p. 293: "Here's my version of the law of conservation of information: "In a nonbiological context, the amount of specified information initially present in a system Si, will generally equal or exceed the specified information content of the final system, Sf." This rule admits only two exceptions. First, the information content of the final state may exceed that of the initial state, Si, if intelligent agents have elected to actualize certain potential states while excluding others, thus increasing the specified information content of the system. Second, the information content of the final system may exceed that of the initial system if random processes, have, by chance, increased the specified information content of the system. In this latter case, the potential increase in the information content of the system is limited by the "probabilistic resources" available to the system."


Utterly laughable. The weasel word "generally" means that he can dismiss exceptions when they are presented. And what does "in a nonbiological context" mean? How does biology magically manage to violate this "law"? If people are intelligent agents, they are also assemblages of matter and energy. How do they magically manage to increase information?

p. 337 "Neither computers by themselves nor the processes of selection and mutation that computer algorithms simulate can produce large amounts of novel information, at least not unless a large initial complement of information is provided."

Pure assertion. "Novel information" is not defined. Meyer completely ignores the large research area of artificial life, which routinely accomplishes what he claims is impossible. The names John Koza, Thomas Ray, Karl Sims, and the term "artificial life" appear nowhere in the book's index.

p. 357: "Dembski devised a test to distinguish between these two types of patterns. If observers can recognize, construct, identify, or describe a pattern without observing the event that exemplifies it, then the pattern qualifies as independent from the event. If, however, the observer cannot recognize (or has no knowledge of) the pattern apart from observing the event, then the event does not qualify as independent."

And Dembski's claim to have given a meaningful definition of "independence" is false, as shown in detail in my paper with Elsberry -- not referenced by Meyer.

p. 396: "As noted previously, as I present the evidence for intelligent design, critics do not typically try to dispute my specific empirical claims. They do not dispute that DNA contains specified information, or that this type of information always comes from a mind..."

Critics know that "specified information" is a charade, a term chosen to sound important, with no rigorous coherent definition or agreed-upon way to measure it. Critics know that information routinely comes from other sources, such as random processes. Mutation and selection do just fine.

In summary, Meyer's claims about information are incoherent in places and wildly wrong in others. The people who have endorsed this book, from Thomas Nagel to Philip Skell to J. Scott Turner, uncritically accepting Meyer's claims about information and not even hinting that he might be wrong, should be ashamed.

Saturday, November 28, 2009

The Ol' Information Bait-and-Switch

It seems that my criticism of aging philosopher Thomas Nagel has got the folks at Uncommon Descent running scared. That's because they know their bogus claims about information are being exposed.

I gave an example that trivially refutes Stephen Meyer's claim that "information always comes from a mind": weather prediction. Meteorologists record information such as wind speed, wind direction, and temperature to make their predictions. Under both the informal definition of information used in everyday life, and the formal technical definitions of "information" universally accepted by mathematicians and computer scientists, these quantities indeed represent "information". What is the response?

Of course, it's the old information bait-and-switch trick: Dembski is now claiming that my example was "unspecified information", whereas Meyer was talking about "specified information".

Dembski is an old hand at the information bait-and-switch game, as Elsberry and I showed in detail in our peer-reviewed article. He moves from one definition to another seamlessly, as it suits him, for whatever argument is at hand. This is most apparent in his estimation of probabilities, where he switches back and forth between the uniform probability interpretation and the causal-history interpretation, depending on which one gives the answer he requires. We discuss this at length in our article.

Furthermore, the notion of "specification" comes from Dembski himself, and as Elsberry and I showed, it is completely incoherent. Nobody can say whether a given string is "specified" or not, and "specification" fails to have the properties Dembski claims it has. No mathematician or computer scientist, other than Dembski and his intelligent design friends, uses Dembski's measure or does any calculations with it. To pretend that it is meaningful is not honest.

Just to give one example, here is Dembski and his deep technical and mathematical "proof" that "the [sic] bacterial flagellum" is specified:

"At any rate, no biologist I know questions whether the functional systems that arise in biology are specified." (No Free Lunch)

So, a challenge: which, if any of the following strings constitute "specified information"? Be sure, in your answer, to give all the things that Dembski says are required before one can be sure: the space of events, the rejection function, the rejection region, the "independently-given" specification, the relevant background knowledge, the independence calculation, and so forth.

1. VUIAPIDESFFGWNHCOIDTGLTJCITMTRITIEIISPOFKAAMORSFEOSDSCDNNRHTEHETCOSOUNETNGQBJINB
2. INFORMATIONCSFVICJUWOEFNLMICPTHOPIISDSTNFJABGEODTQIITUNDHGASTRDNEIKTGSBTOHEERCSE
3. UOTDCTTADWDHINEEFVJETIIICIRPAQDCFLNTNOGROFSGOEFRNSSKTIOPTJMBNMSSUNIHOCEGTAEHISIB

Meanwhile, Dembski needs to inform his acolyte "Joe G", who thinks that the proper definition of "information" is "the attribute inherent in and communicated by one of two or more alternative sequences or arrangements of something (as nucleotides in DNA or binary digits in a computer program) that produce specific effects". Well, by that definition, my example of the information used in weather prediction is indeed information -- there are many alternatives in wind speed, direction, and temperature, and no one can doubt that different arrangements of these quantities produce different effects -- namely, different weather.

Joe G, get with the program! Just say my example is "unspecified", and be done with it! No need to trouble yourself with actual thinking.

Sunday, October 04, 2009

Jonathan Wells: Another ID Creationist Who Doesn't Understand Information Theory

Intelligent design creationists love to talk about information theory, but unfortunately they rarely understand it. Jonathan Wells is the latest ID creationist to demonstrate this.

In a recent post at "Evolution News & Views" describing an event at the University of Oklahoma, Wells said, "I replied that duplicating a gene doesn’t increase information content any more than photocopying a paper increases its information content."

Wells is wrong. I frequently give this as an exercise in my classes at the University of Waterloo: Prove that if x is a string of symbols, then the Kolmogorov information in xx is greater than that in x for infinitely many strings x. Most of my students can do this one, but it looks like information expert Jonathan Wells can't.
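Kolmogorov complexity is uncomputable, but a general-purpose compressor makes a crude, computable stand-in for it. The sketch below is only an empirical illustration, not a proof of the exercise: for a pseudo-random (hence practically incompressible) string x, the doubled string xx needs a strictly longer compressed description than x does.

```python
import random
import zlib

random.seed(0)
# A pseudo-random string is incompressible for all practical purposes.
x = bytes(random.getrandbits(8) for _ in range(2000))

k_x = len(zlib.compress(x))       # proxy for K(x)
k_xx = len(zlib.compress(x + x))  # proxy for K(xx)
print(k_xx > k_x)  # duplication increased the (approximate) information
```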

Like many incompetent people, Wells is blissfully unaware of his incompetence. He closes by saying, "Despite all their taxpayer-funded professors and museum exhibits, despite all their threats to dismantle us and expose us as retards, the Darwinists lost."

We don't have to "expose" the intelligent design creationists as buffoons; they do it themselves whenever they open their mouths.

Saturday, January 03, 2009

Test Your Knowledge of Information Theory

Creationists think information theory poses a serious challenge to modern evolutionary biology -- but that only goes to show that creationists are as ignorant of information theory as they are of biology.

Whenever a creationist brings up this argument, insist that they answer the following five questions. All five questions are based on the Kolmogorov interpretation of information theory. I like this version of information theory because (a) it does not depend on any hypothesized probability distribution (a frequent refuge of scoundrels), (b) the answers about how information can change when a string is changed are unambiguous and agreed upon by all mathematicians, allowing less wiggle room to weasel out of the inevitable conclusions, and (c) it applies to discrete strings of symbols and hence corresponds well with DNA.

All five questions are completely elementary, and I ask these questions in an introduction to the theory of Kolmogorov information for undergraduates at Waterloo. My undergraduates can nearly always answer these questions correctly, but creationists usually cannot.

Q1: Can information be created by gene duplication or polyploidy? More specifically, if x is a string of symbols, is it possible for xx to contain more information than x?

Q2: Can information be created by point mutations? More specifically, if xay is a string of symbols, is it possible that xby contains significantly more information? Here a, b are distinct symbols, and x, y are strings.

Q3: Can information be created by deletion? More specifically, if xyz is a string of symbols, is it possible that xz contains significantly more information?

Q4: Can information be created by random rearrangement? More specifically, if x is a string of symbols, is it possible that some permutation of x contains significantly more information?

Q5: Can information be created by recombination? More specifically, let x and y be strings of the same length, and let s(x, y) be any single string obtained by "shuffling" x and y together. Here I do not mean what is sometimes called "perfect shuffle", but rather a possibly imperfect shuffle where x and y both appear left-to-right in s(x, y), but not necessarily contiguously. For example, a perfect shuffle of 0000 and 1111 gives 01010101, and one possible non-perfect shuffle of 0000 and 1111 is 01101100. Can an imperfect shuffle of two strings have more information than the sum of the information in each string?

The answer to each question is "yes". In fact, for questions Q2-Q5, I can even prove that the given transformation can arbitrarily increase the amount of information in the string, in the sense that there exist strings for which the given transformation increases the complexity by an arbitrarily large multiplicative factor. I won't give the proofs here, because that's part of the challenge: ask your creationist to provide a proof for each of Q1-Q5.
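I won't spoil the proofs, but here is a crude empirical illustration of Q2, using a general-purpose compressor as a rough stand-in for Kolmogorov complexity (zlib is only a proxy, and this is a demonstration, not a proof): a single "point mutation" in a highly regular string strictly increases its compressed size.

```python
import zlib

x = b"a" * 10000                      # highly regular: compresses to a handful of bytes
y = b"a" * 5000 + b"b" + b"a" * 4999  # the same string after one "point mutation"

print(len(zlib.compress(x)))
print(len(zlib.compress(y)))  # strictly larger: the mutation added information
```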

Now I asserted that creationists usually cannot answer these questions correctly, and here is some proof.

Q1. In his book No Free Lunch, William Dembski claimed (p. 129) that "there is no more information in two copies of Shakespeare's Hamlet than in a single copy. This is of course patently obvious, and any formal account of information had better agree." Too bad for him that Kolmogorov complexity is a formal account of information theory, and it does not agree.

Q2. Lee Spetner and the odious Ken Ham are fond of claiming that mutations cannot increase information. And this creationist web page flatly claims that "No mutation has yet been found that increased the genetic information." All of them are wrong in the Kolmogorov model of information.

Q4. R. L. Wysong, in his book The Creation-Evolution Controversy, claimed (p. 109) that "random rearrangements in DNA would result in loss of DNA information". Wrong in the Kolmogorov model.

So, the next time you hear these bogus claims, point them to my challenge, and let the weaselling begin!