Quotes and page references are from Dr. Lee M. Spetner: Not by chance! - Shattering the Modern Theory of Evolution, The Judaica Press, Inc., 1998.
All ends well that begins well, and the very first words in the Preface of the book are (cf. p. vii):
Conventional wisdom holds that life arose spontaneously. In the remote past a simple living organism is supposed to have formed by chance out of inert matter. That organism is then supposed to have reproduced and developed into the life of today through random variation shaped by natural selection.
Oh well, yet another straw man on its way to the stake, it would seem.
According to Spetner, a few biologists have pointed out that discoveries in biology together with elementary principles of information theory have made this view untenable. His key point is that "[t]he information required for large-scale evolution cannot come from random variations." (ibid.) And for Spetner it is not only evolutionary theory that will fall down, it's the whole modern show. On p. viii, he writes:
Why is randomness important? It is important because it has had a profound influence on the shaping of the Weltanschauung of Western Society. It has led to atheism and to the belief that we human beings are no more than a cosmic accident. This belief serves as a basis for the social values and morals held by Western intellectuals, and for their attitudes toward religion. If the belief is unfounded, then the resulting world view and its implications must be reexamined.
So, everything bad can be attributed to that belief in randomness, and Spetner is going to show us that randomness just doesn't cut it, and we'll all realize our delusions and go back to church, right?
Not that Spetner is against evolution as such, but he claims that evolution proceeds through a combination of cues from the environment with information already in the genes; that is, nothing new ever comes around, it is only a question of which information is used when (cf. p. xi). And if random variation cannot be the mechanism of evolution, we must search for some nonrandom mechanism, a mechanism that could not itself have evolved. So how did it ever arise? That is a question, Spetner claims, science may never be able to answer; however, he asks, is Creation an option? (ibid.)
The Preface ends with the words (p. xii):
I hope you will read the book with an open and inquisitive mind, that you will follow my arguments and finally agree with my conclusion. I hope you enjoy reading it.
Well, as should be clear, I may be somewhat prejudiced against this book, and I may even have decided beforehand not to enjoy it; but I will try as well as I can to follow Spetner's arguments and check how well they work. I think that's as much as I can promise.
Chapter 1, "Historical Background", is actually well-written and gives, as its name suggests, the historical background from the 18th century through Darwin's Origin of Species to the Modern Synthetic Theory of Evolution (cf. pp. 20-21), which combined Darwinian evolution with Mendelian inheritance. While Darwin had rejected randomness as the source of variation, attributing it instead to the influence of the environment and use and disuse of organs, the neo-Darwinians rejected environmental influence on variation, which they attributed to random, genetic variations, mutations. As Spetner mentions, after the rediscovery of Mendel's results around 1900, genetic mutations had been observed, but being generally harmful to organisms, they had been rejected as the source of evolutionary variation, and Darwin's theory of evolution was actually in deep trouble in the early 20th century.
On pp. 21-22, Spetner sums this development up:
The neo-darwinians then built their theory on random variation culled and directed by natural selection. They identified the heritable variations needed by the theory with the mutations discovered and named by De Vries some forty years earlier. A decade later, Watson and Crick identified the heritable variation with the random errors in DNA replication.
So far, so good; this chapter could even be recommended reading for creationists.
It may be worth noting that in Darwin's time, the ideal in natural science was deterministic laws; but during the last half of the 19th century, statistics grew in importance, even within the natural sciences. The early 20th century then introduced quantum mechanics and truly random events, so the neo-Darwinians were in many ways dependent on the acceptance of randomness within the scientific community.
DNA and mutations as random errors in DNA replication is the cue for Chapter 2, "Information and Life", which presents an introduction to DNA.
On pp. 28-29, Spetner introduces the four DNA bases adenine, guanine, thymine, and cytosine, and on p. 29 he writes:
There is no chemical restriction on the order of the bases along a strand of DNA; the order can be anything at all. The order of the bases is then free to carry information.
And, according to Spetner, this information is written with the four DNA bases as an alphabet. While what Spetner writes is quite fine, a few important details are left out. The bases are actually read in triplets called codons, which Spetner mentions later (p. 44), and since there are 64 different possible codons, the actual alphabet has 64 letters. The bases are not like Scrabble blocks, identical except for the letters painted on them; chemically they fall into two groups with two bases each: the pyrimidines (cytosine and thymine) and the purines (adenine and guanine). A codon either encodes an amino acid or functions as a stop code, and since there are only 20 amino acids, several codons share the same translation; much of this degeneracy follows the purine/pyrimidine distinction rather than the identity of the individual bases.
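As a quick back-of-the-envelope check of the alphabet arithmetic (my own illustration, not from the book): a codon can carry at most 6 bits, while distinguishing 20 amino acids requires only about 4.32, and the difference is where the degeneracy lives.

```python
import math

codons = 4 ** 3        # four bases read in triplets: 64 possible codons
amino_acids = 20       # which encode 20 amino acids plus stop signals

print(math.log2(codons))       # 6.0 bits per codon at most
print(math.log2(amino_acids))  # ~4.32 bits needed to pick one amino acid
```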
For Spetner, the sequence of bases makes up a message, and at the bottom of p. 29 he writes:
How did that message get written in the first place? The standard answer of the biologist is that the message got written by itself, through evolution, and that evolution works the way the neo-Darwinian theory says it does. But I shall show that evolution cannot work that way.
The meaning here is that as DNA is copied, some copy errors may occur and lead to mutations. Spetner's claim is that information theory shows that no new information can be gained this way.
On pp. 31-34, Spetner mentions proteins, including enzymes, that are made up of the amino acids encoded in the DNA, ending with the paragraph:
The information in the genome [= the DNA in the cell] tells the cell what kind of proteins to make. Because proteins play a dominant role in cell function, they play a dominant role in the whole organism. The information in the genome, by controlling the making of protein, fixes the form and function of the entire organism.
So, the organism is basically defined by its DNA.
In Chapter 3, "The Neo-Darwinian Theory of Evolution", Spetner presents the NDT, and on pp. 68-69 also the theory of punctuated equilibria.
On p. 71, Spetner writes:
According to the NDT, information can be added only through selection. Selection tests if the mutation is positive or negative, preserving it if positive and destroying it if negative. Even the most complicated mutation serves only as grist for the mill of selection.
And a paragraph later:
According to the NDT, the receiver of the information is the genome — not the genome of any one individual, but the average genome of the population. That's where the message is ultimately supposed to be received, and that's where the information is supposed to build up.
What Spetner refers to by "the average genome of the population" would be the gene pool of the population, not actually the average genome.
Continuing, Spetner writes:
When a mutation occurs, selection can choose only between the mutant and the rest of the population. It can choose the better from the good, the more adaptive from the less adaptive. In one step, selection can add no more than one bit of information. That's because it makes only a binary choice between yes and no, no matter how complex the two options.
Well, not exactly. Selection does not work in this binary way. The organism with the mutation might reproduce better than any of the others, but the other organisms do not stop reproducing altogether just because of that.
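A minimal sketch of how selection is usually modeled makes the point, assuming a simple haploid model with a hypothetical selective advantage s (the numbers are mine, purely for illustration): the mutant's frequency rises gradually, while the rest of the population keeps reproducing all along.

```python
s = 0.05    # hypothetical selective advantage of the mutant
p = 0.001   # initial frequency of the mutant in the population

# Standard haploid selection recurrence: relative, not all-or-nothing.
for generation in range(301):
    if generation % 50 == 0:
        print(f"generation {generation}: mutant frequency {p:.4f}")
    p = p * (1 + s) / (p * (1 + s) + (1 - p))
```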
On p. 73, Spetner writes:
If a copying error were to damage a gene so it no longer functioned, the genome will have become less complex, and some of its information will have been lost. You might think the mutation has wiped out all the information in that gene. You might think that after the mutation, the genetic information is as if the gene weren't there. But the damaged gene is still there and nearly intact. The only defect is the one mutated nucleotide.
The meaning here is that either a gene functions or it does not. In actual practice it is rarely like that, but that is of less importance here. Spetner's point is that just as one tiny mutation can switch off an entire gene, a tiny mutation can switch on an entire gene; but the gene had to be there already. That is, according to Spetner, mutations do not make genes; they only turn them on or off. This is indeed 1 bit of information for a specific genome; but when we talk about evolution, we deal in populations, not in individual organisms.
As in the last quote from p. 73, Spetner is rather equivocal about 'information'. This tendency gets worse in Chapter 4, "Is the Deck Stacked?".
On p. 96, Spetner defines two criteria for cumulative selection. The definitions are reformulated on p. 106:
[The evolutionary steps] must be able to be part of a long series in which the mutation in each step is adaptive.
The mutations must, on the average, add a little information to the genome.
But what is "a little information"?
Continuing, Spetner writes:
The information a mutation adds in a typical step of cumulative selection must fall within strict limits. On the average, each step must add some information. Yet it cannot be much more than one bit. Each step must add some information on the average so that information can build up over the full chain of steps that make up macroevolution. But if a mutation seems to have much more than one bit it can't be a part of cumulative selection. We would have to interpret that mutation as the switching on of information already in the genome, as I noted in Chapter 3.
This doesn't make it any clearer, does it? We really need to know how Spetner measures information in a genome.
On pp. 108-109, Spetner compares cumulative evolution to finding a path through a huge tree-like maze starting at the root. Spetner assumes that it takes 500 evolutionary steps to turn one species into another, so there are 500 levels of nodes in the tree, and with a genome of one million nucleotides, there are one million branches from each node.
First, there will at any time be more than one mutation present in the population's gene pool, and second, a choice among one million branches corresponds to about 20 bits of information, if we take information to be the number of bits needed to single out the branch taken, so each step would add about 20 bits of information, not a maximum of 1 bit.
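The arithmetic behind the 20-bit figure is just a base-2 logarithm; a one-liner (my own illustration) makes it explicit:

```python
import math

branches = 10 ** 6          # one million branches from each node
print(math.log2(branches))  # ~19.93, i.e. about 20 bits per step
```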
The title of Chapter 5 is "Can Random Variation Build Information?", and that at least sounds promising. Hopefully, Spetner is going in this chapter to tell us more about how he measures information.
And we are in luck here. On p. 134, Spetner writes:
Before we enter the subject of mutations and information, let's first see how information is related to specificity. The more specific a gene, the more information it contains. In general, the more specific any message, the more information it contains. The information in a gene is the same as the information in the protein it encodes.
Or, the more possibilities ruled out, the more information.
Spetner's subsequent discussion has a few oddities to it. For example, he discusses on pp. 139-142 bacterial resistance to antibiotics such as streptomycin, a drug that attaches to a matching site on a ribosome and interferes with the bacterium's production of proteins. A mutation can change the site so that streptomycin can no longer attach to it. At the bottom of p. 141, Spetner sums up:
Although such a mutation can have selective value, it decreases rather than increases the genetic information. It therefore cannot be typical of mutations that are supposed to help form small steps that make up macroevolution. Those steps must, on the average, add information. Even though resistance is gained, it's gained not by adding something, but by losing it. Rather than say that the bacterium gained resistance to the antibiotic, we would be more correct to say that it lost its sensitivity to it. It lost information.
Maybe so, were it not for the fact that Spetner contradicts himself here. On p. 137, he introduces a word-enzyme that matches all words containing a certain string. As an example, Spetner mentions the string "ghtsha", and claims that there is only one English word with that as a substring, the word nightshade. That is, such a word-enzyme would have a high specificity. Spetner then writes (cf. p. 137):
If we reduce the information in the match string by dropping the "a" at the end, and match only to "ghtsh," the match becomes less specific since there are now two more words that match, namely, nightshirt and lightship. These matches, however, would be "weaker" than the previous one because they match to only five letters, whereas the previous match was to six.
Let's reverse this. Assume our word-enzyme starts out looking for the string "ghtsh" and then changes to "ghtsha"; then there would have been an increase in information due to the extra letter, and an increase in specificity, because fewer strings match. However, with the logic from the example with resistance to streptomycin, we might as well say that the word-enzyme had lost its sensitivity to nightshirt and lightship and therefore lost information.
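The word-enzyme is easy to mimic in a few lines of code. The word list below is a tiny hypothetical stand-in for a full dictionary, and I count as information the base-2 logarithm of how much a match narrows the list down; both choices are mine, for illustration only.

```python
import math

# A tiny hypothetical word list standing in for a full dictionary.
words = ["nightshade", "nightshirt", "lightship",
         "nightfall", "midnight", "daylight"]

def word_enzyme(pattern):
    """Return the words the pattern 'matches to', Spetner-style."""
    return [w for w in words if pattern in w]

for pattern in ["ghtsh", "ghtsha"]:
    hits = word_enzyme(pattern)
    bits = math.log2(len(words) / len(hits))
    print(f"{pattern!r} matches {hits}: {bits:.2f} bits of specification")
```

The longer string matches fewer words and, on this way of counting, carries more information; read the same change in the other direction, and it becomes a loss of sensitivity, which is exactly the ambiguity pointed out above.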
As Spetner has it, whether information is lost or gained is only a question of how we look at it. His concluding remarks on p. 143 are therefore somewhat on the comical side:
The NDT is supposed to explain how the information of life has been built up by evolution. The essential biological difference between a human and a bacterium is in the information they contain. All other biological differences follow from that. The human genome has much more information than does the bacterial genome. Information cannot be built up by mutations that lose it.
But since the example of a mutation mentioned by Spetner could be interpreted as an increase in information rather than as a decrease in information, there's really no reason to panic.
Ian Musgrave in the TalkOrigins article Information Theory and Creationism: Spetner and Biological Information goes into much more detail about this.
Chapter 6, "The Watchmaker's Blindness", is a critique of Richard Dawkins' evolution simulations in the book The Blind Watchmaker. There is little to comment on here – Dawkins' point was to illustrate the difference between no selection and selection, and his examples did that quite well. The mutation rates in the examples are unrealistically high, but in return the population sizes are unrealistically small, so Spetner's critique doesn't quite hit home.
In Chapter 7, "The Deck is Stacked!", Spetner claims that mutations can be induced by the environment and thus are not 'random' as required by the NDT.
Ian Musgrave also comments on this subject in the TalkOrigins article linked above, in the section Spetner and "directed" mutation.
Spetner mentions on pp. 187-191 experiments by Barry Hall and John Cairns, in which mutations occur that enable bacteria to feed on nutrients they usually cannot feed on. These experiments indicate that it is the presence of the nutrient that induces the mutations, as if the bacteria could choose which genes to enable depending on the available nutrients.
On p. 190, Spetner writes:
If the results of these experiments indicate that adaptive mutations are stimulated by the environment, they contradict the basic dogma of neo-Darwinism. According to that dogma, mutations are random, and the kind of mutations that occur are independent of the environment. If mutations are really nonrandom in the sense that the environment can stimulate adaptive mutations, then the paradigm of Darwinian evolution, which has dominated the biological sciences for close to 150 years, must be replaced.
How can something that might disprove a statement about mutations force a replacement of the paradigm of Darwinian evolution, when Darwin knew nothing about mutations? It would be the same as disproving the existence of, say, magnetism by disproving a particular theory about magnetism.
Spetner provides us with an answer in a passage in the last paragraph on p. 191:
Resistance to the nonrandom-variation interpretation stems from a refusal to abandon the Darwinian agenda that evolution must confirm that life arose and developed spontaneously. With that agenda, nonrandom adaptive variation, arising from an environmental signal turning on an already present set of genes, is hard to account for.
The reasoning here is that organisms can only turn genes on or off if those genes already exist, and if all that mutations can do is turn genes on or off, mutations cannot explain the evolution of the genes in the first place.
In general, this chapter presents a neo-Lamarckian concept of evolution, where new traits are acquired depending on the environment, including the kind of food eaten, and these new traits are inherited. It is unclear to me whether Spetner's examples are not just what are called forms of a species: environmentally dependent variations that are not actually new traits.
In Chapter 8, the Epilogue, Spetner sums up his critique of the NDT and proposes a theory called the nonrandom evolutionary hypothesis, NREH, and he concludes with the words (cf. p. 208):
The NREH is a hypothesis that explains many observed phenomena that the NDT does not explain. According to the NREH, adaptive modifications in organisms occur when the environment induces a change in either the phenotype or the genotype. It can account for the environmentally induced adaptive mutations reported in bacteria. It can account for the pervasive convergences found throughout the plant and animal kingdoms. The NREH does not suffer from the contradictions of the NDT, and promises therefore to provide a more consistent picture of life.
And it's even consistent with the Torah.
For the layman, this may all appear very persuasive, and it is beyond my qualifications to give much of an in-depth critique. However, even I can see some obvious problems with the NREH as presented by Spetner, such as his inconsistent information metric, and the possibility that what he considers to be different species may simply be different forms of the same species.
Carl Wieland of AnswersInGenesis has written a review of Not By Chance! (first published in Creation 20(1): pp. 50–51, December 1997), which begins with the words:
‘See, speak, or hear no evidence against evolution’ seems to be the golden rule in the academic world—so it will be interesting to see the response to this devastating assault, by a highly qualified author, on the very core of evolutionary theory.
As should be clear from the above, Spetner's 'assault on the very core of evolutionary theory' can hardly be considered devastating, simply because chance is not the very core of evolutionary theory.
Later, Wieland writes:
In a memorable turn of phrase, he says that anyone who thinks that an accumulation of mutations (information-losing processes) can lead to macroevolution (a massive net gain of information) ‘is like the merchant who lost a little money on every sale but thought he could make it up on volume.’
After such a ‘king hit’, Dawkins’ computer simulations of ‘insects’ and literary weasels seem somewhat puerile, and are easily dealt with by the author, who, from reliable information received, is rather keen to debate Dawkins on this whole issue. Why not, when one appears to be equipped with such decisive scientific weapons?
Except that Spetner is somewhat equivocal about that information thing, so let us try to go into more detail about Spetner's method of measuring information.
On TalkOrigins, Edward E. Max has posted an article The Evolution of Improved Fitness by Random Mutation Plus Selection, and a longer exchange between Max and Spetner followed. This exchange is summed up by Spetner in the TrueOrigins article A Scientific Defense of a Creationist Position on Evolution.
In this article, Spetner writes:
The information content of the genome is difficult to evaluate with any precision. Fortunately, for my purposes, I need only consider the change in the information in an enzyme caused by a mutation. The information content of an enzyme is the sum of many parts, among which are:
- Level of catalytic activity
- Specificity with respect to the substrate
- Strength of binding to cell structure
- Specificity of binding to cell structure
- Specificity of the amino-acid sequence devoted to specifying the enzyme for degradation
These are all difficult to evaluate, but the easiest to get a handle on is the information in the substrate specificity.
This is a more elaborate version of what Spetner wrote in Chapter 5 of Not By Chance! Information rules out possibilities, so the more specificity we have, the more information we have.
Spetner continues:
To estimate the information in an enzyme I shall assume that the information content of the enzyme itself is at least the maximum information gained in transforming the substrate distribution into the product distribution. (I think this assumption is reasonable, but to be rigorous it should really be proved.)
Who is gaining information here? Are we supposed to think that an enzyme gains information by "transforming the substrate distribution into the product distribution"? What Spetner is referring to is that we can make an experiment involving various substrates and observe the result, if any, of enzymatic activity. So the one gaining information is the experimenter, not the enzyme. That is, the information content is not something within the enzyme, but something that is generated by the experiment.
But Spetner sees it differently:
We can think of the substrate specificity of the enzyme as a kind of filter. The entropy of the ensemble of substances separated after filtration is less than the entropy of the original ensemble of the mixture. We can therefore say that the filtration process results in an information gain equal to the decrease in entropy. Let’s imagine a uniform distribution of substrates presented to many copies of an enzyme. I choose a uniform distribution of substrates because that will permit the enzyme to express its maximum information gain. …
The products of a substrate on which the enzyme has a higher activity will be more numerous than those of a substrate on which the enzyme has a lower activity. Because of the filtering, the distribution of concentrations of products will have a lower entropy than that of substrates.
That is, for Spetner it is the enzyme that gains information. Are we to think that the enzyme takes a nip of each substrate to figure out which ones are tasty and which ones are not? And even if so, where is the memory of the enzyme, in which it stores the gained information?
The entropy Spetner refers to is Shannon entropy (almost defined by Spetner a few paragraphs later), which indeed assumes its maximum for a uniform distribution. However, Shannon's model was a communication system, where the sender can choose among a number of messages to send; all the messages are possible, but not necessarily equally likely. The entropy measures the receiver's uncertainty about which message was sent, an uncertainty that assumes its maximum if all messages are equally probable. In Spetner's case, the entropy would be the experimenter's uncertainty about which substrates the enzyme will catalyze. However, that is not related to the substrate distribution, but to the experimenter's prior assumptions about which substrates the enzyme will catalyze. Using equal amounts of the substrates does make calculations easier, but it has nothing to do with entropy.
The word 'ensemble' used by Spetner refers to what is called a statistical ensemble: the set of possible states of some system, often subject to some conditions. A simple example would be the entire set of sequences of 100 tosses of a coin. Each specific sequence is a member of the ensemble. Tossing a coin 100 times would return a sample from this ensemble; tossing the coin 100 times again would return another sample. If we tossed the coin 1,000 times, there would be 901 positions where a sub-sequence of 100 tosses begins, and each such sub-sequence would be a sample from the ensemble of possible sequences of 100 coin tosses. That is, an ensemble only exists conceptually, whereas a sample has real existence. Therefore Spetner's use of the expressions "[t]he entropy of the ensemble of substances separated after filtration" and "the entropy of the original ensemble" is meaningless. What might be meaningful is to consider the ensemble of samples from these collections, say samples of X ml each.
However, we'll leave this subject and return to Spetner, who defines entropy like this:
The entropy of an ensemble of n elements with fractional concentrations f1,…,fn is given by

H = ∑i fi log fi     (1)
and if the base of the logarithm is 2, the units of entropy are bits.
Well, not quite, unless Spetner actually wants to define entropy in that way. What is missing is a minus-sign in front of the ∑-symbol.
However, we'll leave that subject as well and return to Spetner, who first illustrates the formula by assuming that the enzyme is active on only one of the substrates, which he describes as perfect filtering, the substrates being uniformly distributed, that is, f1 = … = fn = 1/n. The input entropy in this case is given by Spetner as HI = log n, which is quite correct, though not according to his formula (1), which would give HI = n•(1/n log 1/n) = log 1/n = − log n. The output entropy is given by Spetner as HO = 0. The decrease in entropy is therefore H = HI − HO = log n, which equals the gain in information.
Spetner next considers the opposite extreme: an enzyme which does not discriminate between the substrates but leaves products from all of them. Here the input entropy and the output entropy are the same, HI = HO = log n, and the difference, the decrease in entropy, is H = 0.
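For concreteness, here is the calculation with the minus sign restored, for an arbitrary n = 8 (my own sketch of Spetner's two extreme cases):

```python
import math

def shannon_entropy(fractions):
    """Shannon entropy in bits, with the minus sign formula (1) lacks."""
    return -sum(f * math.log2(f) for f in fractions if f > 0)

n = 8
uniform = [1 / n] * n
h_in = shannon_entropy(uniform)          # H_I = log2(8) = 3.0 bits

# Perfect filtering: all products come from a single substrate.
h_out_perfect = shannon_entropy([1.0])   # H_O = 0.0, so the 'gain' is 3 bits

# No discrimination: products mirror the uniform substrate mixture.
h_out_flat = shannon_entropy(uniform)    # H_O = 3.0, so the 'gain' is 0 bits

print(h_in, h_out_perfect, h_out_flat)
```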
The problem with all this, even using the correct formula for Shannon entropy, is that it doesn't make any sense. The number n is chosen by the experimenter, so how can it be encoded in the enzyme? Assume the enzyme does not catalyze any of the substrates; then the output entropy is HO = n•(0 log 0) = 0, just as in the case where the enzyme catalyzes exactly one substrate. How meaningful is it to say that an enzyme that can catalyze no substrate has the same information encoded as one that can catalyze exactly one substrate? And not only that: while it would be possible to have a collection of substrates, none of which a given enzyme can catalyze, the sum of the fi's must be 1. That is, the experimenter must make sure that the enzyme is able to catalyze at least one of the substrates. How can that have been encoded in the enzyme?
Assume that we are to guess a number between 1 and n, and that we have no reason not to assume each number to be equally probable. We can guess at random, or more systematically start with 1 and increase our guess by 1 until we hit. The minimum number of guesses needed is 1, if we hit on the first guess, and the maximum is n−1 (if all those guesses were wrong, we know the answer without having to guess the nth time), and the average number of guesses needed will be about n/2. Even better would be to bisect the remaining candidates in each round: first ask if the number is smaller than or equal to n/2; if that returns a 'yes', then ask if the number is smaller than or equal to n/4, and if it returns a 'no', then ask if the number is smaller than or equal to 3n/4; and so on. The minimum number of guesses needed will be log n rounded down, the maximum will be log n rounded up, and the average number of guesses needed will be about log n.
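A small simulation of the bisection strategy (again my own illustration) confirms the log n figure:

```python
import math

def questions_needed(target, n):
    """Count the yes/no questions bisection needs to find target in 1..n."""
    low, high, count = 1, n, 0
    while low < high:
        mid = (low + high) // 2
        count += 1                 # ask: is the number <= mid?
        if target <= mid:
            high = mid
        else:
            low = mid + 1
    return count

n = 1000
average = sum(questions_needed(t, n) for t in range(1, n + 1)) / n
print(average, math.log2(n))       # both come out close to 10
```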
In a situation such as my example with guessing a number, it would be appropriate to talk about entropy, because there is an initial uncertainty concerning which number has been chosen, and the information provided by the answer to each guess reduces that uncertainty. But in Spetner's example, this is not the case. If there is some uncertainty, it would be the experimenter's uncertainty concerning how many of the chosen substrates the enzyme can catalyze. Without any other prior information, this could be anywhere from 0 to n, and assuming equal probability, that would give an entropy of log (n+1), not log n.
In his article, Spetner moves on to giving more details about an example from Chapter 5 of Not By Chance!:
Ribitol is a naturally occurring sugar that some soil bacteria can normally metabolize, and ribitol dehydrogenase is the enzyme that catalyzes the first step in its metabolism. Xylitol is a sugar very similar in structure to ribitol, but does not occur in nature. Bacteria cannot normally live on xylitol, but when a large population of them were cultured on only xylitol, mutants appeared that were able to metabolize it. The wild-type enzyme was found to have a small activity on xylitol, but not large enough for the bacteria to live on xylitol alone. The mutant enzyme had an activity large enough to permit the bacterium to live on xylitol alone.
As Spetner explains, the mutant bacterium had gained an increased activity on xylitol at the expense of a decreased activity on ribitol. But Spetner warns against seeing this as evidence for the neo-Darwinist position:
An evolutionist would be tempted to see here the beginning of a trend. He might be inclined to jump to the conclusion that with a series of many mutations of this kind, one after another, evolution could produce an enzyme that would have a high activity on xylitol and a low, or zero, activity on ribitol. Now wouldn’t that be a useful thing for a bacterium that had only xylitol available and no ribitol? Such a series would produce the kind of evolutionary change NDT calls for. It would be an example of the kind of series that would support NDT. The series would have to consist of mutations that would, step by step, lower the activity of the enzyme on the first substrate while increasing it on the second.
Where is the problem? According to Spetner, the problem is that not enough data is considered. As he explains, experiments indicated that the mutant also had an increased activity on L-arabitol relative to the wild type, and:
With the additional data on L-arabitol, a different picture emerges. No longer do we see the mutation just swinging the activity away from ribitol and toward xylitol. We see instead a general lowering of the selectivity of the enzyme over the set of substrates.
That is, the mutant bacterium has a lowered activity on ribitol and an increased activity on both of the other sugars. Concluding, Spetner writes:
In Fig. 1 [comparison of activity on ribitol and xylitol] alone, there appears to be a trend evolving an enzyme with a high activity on xylitol and a low activity on ribitol. But Fig. 2 [activity on L-arabitol added] shows that such an extrapolation is unwarranted. It shows instead a much different trend. An extrapolation of the trend that appears in Fig. 2 would indicate that a series of such mutations could result in an enzyme that had no selectivity at all, but exhibited the same low activity on a wide set of substrates.
And then the punch line:
The point to be made from this example is that conclusion jumping from the observation of an apparent trend is a risky business. From a little data, the mutation appears to add information to the enzyme. From a little more data, the mutation appears to be degrading the enzyme’s specificity and losing information.
Well, not exactly; it is only a consequence of Spetner's way of seeing things. The mutation has increased the ability of the enzyme in the mutant bacterium to catalyze sugars other than ribitol, which certainly is quite useful if there is no ribitol around and one of the other sugars is available. Whether it adds information to the enzyme or not depends on how we measure information and is not really relevant for evolution. Evolution is not about adding information.
However, Spetner sees it that way and shows that the mutant has lost information:
Just as we calculated information in the two special cases above, we can calculate the information in the enzyme acting on a uniform mixture of the three substrates for both the wild type and the mutant enzyme. Using the measured activity values reported by Burleigh et al. we find the information in the specificities of the two enzymes to be 0.74 and 0.38 bits respectively. The information in the wild-type enzyme then turns out to be about twice that of the mutant.
Yes, with Spetner's information metric; but as noted above, that metric is Spetner's own artifact of the way in which these experiments are carried out and has nothing to do with any intrinsic information in the enzyme.
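We do not need the actual numbers from Burleigh et al. to see how the metric behaves. A Spetner-style calculation with made-up activity values shows both how the computation goes and how it depends on the experimenter's choice of substrates:

```python
import math

def shannon_entropy(fractions):
    return -sum(f * math.log2(f) for f in fractions if f > 0)

def spetner_information(activities):
    """Spetner-style 'information': the entropy drop from a uniform substrate
    mixture to the product distribution these activities would produce."""
    total = sum(activities)
    products = [a / total for a in activities]
    return math.log2(len(activities)) - shannon_entropy(products)

# Hypothetical activities on ribitol, xylitol and L-arabitol; invented
# to mimic the shape of Spetner's example, not the measured values.
wild_type = [100, 5, 1]
mutant = [60, 30, 20]

print(spetner_information(wild_type))   # high: sharply peaked on ribitol
print(spetner_information(mutant))      # lower: a flatter distribution

# Add a fourth substrate to the test set, and both numbers shift:
# the metric depends on the experimenter's choice of n.
print(spetner_information(wild_type + [1]))
```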
Spetner quotes from the exchange with Max:
Max: I want to make it clear that I don’t buy your interpretation of certain specific mutations as reflecting a ‘loss of information.’ You state that the ‘information content of an enzyme is the sum of many parts, among which are: level of catalytic activity, specificity with respect to the substrate, strength [and specificity] of binding to cell structure, [and] specificity of the amino-acid sequence devoted to specifying the enzyme for degradation.’ This formulation is vague, non-quantitative, not supported by clear logic, not accepted in the scientific literature (to the best of my knowledge; please educate me if I am wrong), and in my view not useful.
To which Spetner comments:
Ed, the level of your argument here is quite low. You have seen this entire section (above), and you took from the introduction my list of what characteristics can contribute to the information content of an enzyme and criticized it for being non-quantitative (followed by other pejorative epithets). Is that supposed to be some sort of debating tactic? In any case, the tactic is out of place in this discussion. From the context of what I wrote, it should have been clear to you that this partial list of characteristics that can contribute to the information in an enzyme was an introduction to my quantitative estimate of one of the characteristics of specificity of an enzyme. After I showed how one might calculate the information related to a type of specificity, I showed how a mutation that appeared to enhance activity on a new substrate actually reduced the information by about 50%.
The problem here is that Spetner's "quantitative estimate of one of the characteristics of specificity of an enzyme" doesn't work. It is quantitative indeed, but the numbers will vary depending on the number of substrates (the value of n in Spetner's examples), and therefore it can hardly be considered to prove anything.
Back to Spetner:
It is elementary that specificity translates into information and vice versa. Have you ever played 20 questions? With the YES/NO answers to 20 judicious questions, one can discover a previously-chosen number between 1 and a million. If the questions are well chosen, those YES/NO answers can be worth one bit of information each, and 20 bits can specify one object out of a million. Twenty bits of information translates to specificity of one part in a million. Ten bits—to one part in a thousand.
Well, yes and no. Does specificity translate into information and vice versa? As Spetner writes here, it takes more bits (on the average) to specify an object out of a larger collection than out of a smaller collection, but that's exactly the problem: it's relative to the collection size. If we were to guess a number between 1 and a million, we would need 20 guesses, even if that number happens to be less than or equal to one thousand. That is, the information required is not related to the number chosen, but to the size of the collection from which the number is drawn.
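This is easy to demonstrate with the bisection counter from above: the same target costs about twice as many questions in the larger range (my illustration, with an arbitrary target of 500):

```python
def questions_needed(target, n):
    low, high, count = 1, n, 0
    while low < high:
        mid = (low + high) // 2
        count += 1
        if target <= mid:
            high = mid
        else:
            low = mid + 1
    return count

# The same target, drawn from collections of different sizes.
print(questions_needed(500, 1_000))      # about 10 questions
print(questions_needed(500, 1_000_000))  # about 20 questions
```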
Later, Spetner quotes Max's critique of his example with streptomycin, and comments:
The wild-type S12 proteins that bind to the streptomycin molecule also form a subset of the universe of all possible S12 proteins. The set of S12 proteins that allow bacterial growth in streptomycin (i.e. that do not bind to the antibiotic) form a disparate subset of the universe of S12 proteins. My intuition tells me that the set that binds (the susceptible set) is smaller, and therefore has a smaller entropy, than the set that does not bind (the resistant set). Mutations that appear in the presence of the antibiotic convert one subset to the other. A mutation that transfers the enzyme from a low-entropy set to a higher-entropy set loses information; it does not gain it.
Ok, if we were one day to guess a number between 1 and 1,000, we would have an entropy of 10 bits, and if we were the next day to guess a number between 1,001 and 1,001,000, we would have an entropy of 20 bits, therefore we have lost information from the first day to the second.
(Note: I am aware that 2^10 = 1,024, not 1,000, and that 2^20 = 1,048,576, not 1,000,000. And I am sure that Spetner is aware of this as well.)
An obvious problem with Spetner's argumentation here is that a potential S12 protein need not be a possible S12 protein. If we toss a coin, we generate samples as we toss along. But that's not how S12 proteins are sampled. Of course, if we were to produce them synthetically, the situation might be more like coin tossing; but synthetically produced S12 proteins would not be of relevance here. However, since I have no idea how many S12 proteins of any kind there are, I will not pursue that subject any further.
Another problem is that Spetner's use of 'entropy' is rather confusing here. He considers a smaller set to have a smaller entropy, which would be in analogy to the example with guessing numbers.
In Not By Chance!, Spetner uses an example with specification of a room in an apartment in a building to illustrate specificity of addressing; but in the article he uses another example:
The Zip codes in the US also demonstrate that specificity and information are two sides of the same coin and go hand in hand. An address in the United States can be completely specified by the nine-digit zip code. One digit of information will narrow down the address from being anywhere in the United States to being in just a few states. Thus if the first digit is a 6, the address is located somewhere in Illinois, Missouri, Kansas, or Nebraska.
A second digit of information will add specificity by narrowing down the address further. A 3, 4, or 5 in the second digit puts the address in Missouri. A 3 in the second digit puts it in the eastern portion of the state. Two digits of information are more specific than one.
A third digit of information is still more specific, narrowing down the address even more, making it still more specific. If the third digit is a 1, the address is specific to St. Louis and its suburbs. The next two digits of information pin down the address to within a few blocks. The last 4 digits of information can locate a specific building. Thus, it is clear that the information contained in the digits of the zip code translate into specificity.
There is no question about it: SPECIFICITY = INFORMATION.
That is, for Spetner, more digits = more specificity = more information. What goes for digits goes for bits (= binary digits) as well. Assuming there to be 1,000 binding S12 proteins, each of these could be specified with 10 bits, and assuming there to be 1,000,000 non-binding S12 proteins, each of these could be specified with 20 bits. That is, more bits would be required to specify any non-binding protein than to specify any binding protein, and therefore, granting Spetner's intuition that there are more possible non-binding than binding S12 proteins, transferring a protein from the binding group to the non-binding group increases information.
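A final sanity check on the arithmetic, using the illustrative set sizes assumed above:

```python
import math

binding = 1_000          # assumed number of binding (susceptible) S12 proteins
non_binding = 1_000_000  # assumed number of non-binding (resistant) ones

print(math.log2(binding))      # ~10 bits to specify one binding protein
print(math.log2(non_binding))  # ~20 bits to specify one non-binding protein
```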
I would guess that what goes wrong is that Spetner thinks about it this way: with the Zip code, each additional digit narrows down the area addressed by the Zip code. That is, more information = more digits = smaller area. However, that only works for subsets. St. Louis and its suburbs is not only a smaller area than the eastern part of Missouri, it is a sub-area. But the binding proteins are not a subset of the non-binding proteins, so the analogy doesn't work.
Returning to Wieland's review, Wieland introduces Spetner with these words:
Jewish scientist Dr Lee Spetner’s book aims a death-blow at the heart of this whole Neo-Darwinian story. The crucial battleground has always been the origin of information, and in this field, Spetner is uniquely qualified to comment. With a Ph.D. in physics from MIT, Spetner taught information and communication theory for years at Johns Hopkins University. In 1962 he accepted a fellowship in biophysics at that institution, where he worked on solving problems in signal/noise relationships in DNA electron micrographs. He subsequently became fascinated with evolutionary theory, and published papers concerning theoretical and mathematical biology in prestigious journals such as the Journal of Theoretical Biology, Nature, and the Proceedings of the 2nd International Congress of Biophysics.
These are fine credentials, and I am certainly not claiming that Lee Spetner doesn't have excellent qualifications. However, for some reason, his use of information theory doesn't quite work. Spetner commits too many simple errors in that use, and considering his qualifications, I would suggest it is because he has let his wish to disprove neo-Darwinism get the better of him. I am quite sure Spetner could have done better than he has.
Wieland ends his review with the words:
To say that Spetner’s book is an absolute ‘must’ for anyone defending Scripture in this increasingly educated age is an understatement. To put it succinctly, it seems that unless evolutionists can pull a brand new rabbit out of the hat, Spetner has just blown the whole evolutionary mechanism out of the water once and for all. The evolutionary/humanist establishment cannot allow this to happen, of course, so it will be interesting to see their reaction and attempts at damage control. I trust that readers of this review will make it as hard as possible for them to ignore this groundbreaking work, by spreading it as far and fast as they can.
Hyperbolic language comes cheap, doesn't it?
To finish off, I must admit that I enjoyed reading Not By Chance!, even if I disagree with Spetner's conclusion. And I must say that the book would have deserved a more thorough review from the creationist camp than Wieland's, which is little more than standard creationist troop mustering and doesn't even mention the parts of the book that are really good.