
Evolution and the Origin of Biological Information, Part 1: Intelligent Design


March 10, 2011 Tags: Design

Today's entry was written by Dennis Venema. You can read more about what we believe here.


One prominent antievolutionary argument put forward by the Intelligent Design Movement (IDM) is that significant amounts of biological information cannot be created through evolutionary mechanisms – processes such as random mutation and natural selection. ID proponent and structural biologist Doug Axe frames the argument this way (his comments begin at approx. 15:19 in the video):

“Basically every gene, every new protein fold… there is nothing of significance that we can show [that] can be had in that gradualistic way. It’s all a mirage. None of it happens that way.”

The importance of this line of argumentation for the IDM can be seen clearly in Stephen Meyer’s book Signature in the Cell (published in 2009). In this book, Meyer claims that an intelligent agent is responsible for the information we observe in DNA because, in his words, natural mechanisms “will not suffice” to explain it:

Since the case for intelligent design as the best explanation for the origin of biological information necessary to build novel forms of life depends, in part, upon the claim that functional (information-rich) genes and proteins cannot be explained by random mutation and natural selection, this design hypothesis implies that selection and mutation will not suffice to produce genetic information … (p. 495)

It’s hard to overstate the importance of this argument for Meyer in Signature, and for the IDM as a whole. In the conclusion to a pivotal chapter entitled “The Best Explanation” Meyer presents the following summary of his case:

Since the intelligent-design hypothesis meets both the causal-adequacy and causal-existence criteria of a best explanation, and since no other competing explanation meets these conditions as well –or at all–it follows that the design hypothesis provides the best, most causally adequate explanation of the origin of the information necessary to produce the first life on earth. Indeed, our uniform experience affirms that specified information … always arises from an intelligent source, from a mind, and not a strictly material process. So the discovery of the specified digital information in the DNA molecule provides strong grounds for inferring that intelligence played a role in the origin of DNA. Indeed, whenever we find specified information and we know the causal story of how that information arose, we always find that it arose from an intelligent source. It follows that the best, most causally adequate explanation for the origin of the specified, digitally encoded information in DNA is that it too had an intelligent source. (p. 347)

Put more simply, Meyer claims that if we see specified information, we infer design, since we know of no mechanism that can produce specified information through an unintelligent, natural process. As a logical argument, Meyer’s position only works if (and this is a big if) his premises are correct.

The issue is that Meyer’s case is open to refutation by counterexample, and even one counterexample would suffice. If any natural mechanism can be shown to produce “functional, information-rich genes and proteins”, then intelligent design is no longer the best explanation for the origin of information we observe in DNA, by Meyer’s own stated criteria. His entire (500+ page) argument would simply unravel.

The obvious problem for Meyer’s case is that biologists are well aware of a natural mechanism that does add functional, specified information to DNA sequences (and in some cases, creates new genes de novo): natural selection acting on genetic variation produced through random mutation. Not only are biologists aware of some examples of natural selection adding functional information to DNA, this effect has been observed time and again, and in some cases it has been documented in exquisite detail. When I reviewed Signature for the American Scientific Affiliation journal Perspectives on Science and Christian Faith (PSCF), what struck me, repeatedly, was that Meyer made no mention of the evidence for natural selection as a mechanism to increase biological information. I fully expected him to dispute the evidence, certainly – but the surprise for me was that he simply denied that it was sufficient, without addressing any of it. The closest Meyer comes to addressing natural selection in Signature is in a section discussing evolutionary algorithms used to simulate evolution. As I said in my review:

Meyer’s denial of random mutation and natural selection as an information generator notwithstanding, in a discussion about evolutionary computer simulations, Meyer makes the following claim:

If computer simulations demonstrate anything, they subtly demonstrate the need for an intelligent agent to elect some options and exclude others – that is, to create information.

Employing this argument, Meyer claims that any mechanism that prefers one variant over another creates information. By this standard, the ample experimental evidence for natural selection as a mechanism that favors certain variants over others certainly qualifies as such a generator. Meyer, however, makes no mention of the evidence for natural selection in the book. (pp. 278-279)

In the PSCF review I went on to point out a few examples of known instances in biology where random mutation and natural selection have indeed led to substantial increases in biological information, but the limitations of space in that format precluded me from exploring those examples in more detail, or from presenting that information at a level readily accessible to non-specialists. In this series of posts I will attempt to remedy that shortcoming by exploring several examples in depth. The question of how new specified information arises in DNA, far from being an “enigma”, is one of great interest to biologists. While the IDM avoids this evidence to present a flawed argument for design, responding to this flawed argument provides an excellent opportunity to discuss some particularly elegant experiments in this area.

Of course, it should be noted that describing how specified information can arise through natural means does not in any way imply God’s absence from the process. After all, natural processes are as much a manifestation of God’s activity as what one would call supernatural events. So-called “natural” laws are what Christians understand to be a description of the ongoing, regular and repeatable activity of God. As such, the dichotomy presented in ID writings of “naturalism” versus theism is a false one: is not God the Author of nature, after all?

In the next post in this series, we will examine an ongoing experiment over twenty years in the making: the Long Term Evolution Experiment (LTEE) on E. coli conducted in the laboratory of Richard Lenski at Michigan State University.


Dennis Venema is professor of biology at Trinity Western University in Langley, British Columbia. He holds a B.Sc. (with Honors) from the University of British Columbia (1996), and received his Ph.D. from the University of British Columbia in 2003. His research is focused on the genetics of pattern formation and signaling using the common fruit fly Drosophila melanogaster as a model organism. Dennis is a gifted thinker and writer on matters of science and faith, but also an award-winning biology teacher—he won the 2008 College Biology Teaching Award from the National Association of Biology Teachers. He and his family enjoy numerous outdoor activities that the Canadian Pacific coast region has to offer. Dennis writes regularly for the BioLogos Forum about the biological evidence for evolution.




This article is now closed for new comments. The archived comments are shown below.

Page 4 of 9
Gregory - #54283

March 14th 2011

Congratulations to Dennis and his Trinity Western University Spartans - national runners-up in the men’s CIS basketball national tournament 2011! Quite a sports program the small private Christian university in Langley, B.C. is building.

This was the first time Trinity Western had ever qualified for the national tournament. And they knocked off my #1 ranked T-Birds on the way to the final!

(& for you USAmericans who don’t know what CIS means or where the ‘other Langley’ is actually located, don’t feel left out: there are several US nationals on the team also, including one of the two star players. At least you can appreciate the David / Goliath factor.)


Alan Fox - #54290

March 14th 2011

Nedbrek and Don Johnson:


You both seem to be referring me to Shannon information, which is a useful way of looking at transmission of messages etc.

This Wikipedia article does actually list some interesting forward links that go into the semantics in great depth.

But this is somewhat beside the point. There appears to be an attempt by some, which has so far failed (in my view), to establish “information” (CSI, FCSI, Active info, DFCSI and I’m sure I’ve omitted some) as a valid concept that has some measurable property and that relates in some way to living organisms and their structure and function. It’s bunk. Convince me otherwise.



nedbrek - #54300

March 14th 2011

Alan, are you suggesting DNA does not encode information in the Shannon’s law sense?  Or, are you saying that Shannon’s law is bunk?


John - #54320

March 14th 2011

Rich:

“I have not only read Meyer’s book, but have read it with minute attention to every word. The question that occurs to me is: have YOU read it?”

Given your assertion, maybe you can explain the truth or falsehood of the following two claims in the book. If you agree that either one is false, maybe you can explain whether the false claim(s) represent errors or deceptions.

p.128

“A protein within the ribosome known as a peptidyl transferase then catalyzes a polymerization (linking) reaction involving the two (tRNA-borne) amino acids.”

p.298 [regarding the RNA World hypothesis]

“According to this model, these RNA enzymes eventually were replaced by the more efficient proteins that perform enzymatic functions in modern cells.”


Then, maybe you can explain the reasoning and evidence behind your astonishing claim that the nature of peptidyl transferase has no relevance whatsoever to the RNA World hypothesis.


John - #54321

March 14th 2011

sy wrote:
“To be more specific RNA world contained a limited set of information, but was inefficient at the production of new proteins.”

But why would that inefficiency matter?

“Only a DNA based coding system has the power to produce a complex living cell.”

On what basis do you make such a claim, and why are cells relevant here? It seems to me that cells are a red herring in Meyer’s book (even in the title)—that he is counting on his readers’ inability to understand the idea of noncellular life. This, in my opinion, is why he engages in so much deception on the most basic aspects of the RNA World hypothesis, particularly withholding the most relevant evidence from his readers.

“But a genetic code is in fact a highly symbolic, strongly informational concept that does not appear to be easily derivable from an RNA based “life” form.”

Sy, there is nothing whatsoever that is symbolic about the genetic code, except humans’ representation of it as letters. It is entirely mechanistic.


Ashe - #54342

March 14th 2011

The way I see it, there are certain elements of order within the genetic code which argue against arbitrariness, but the symmetries are not as straightforward as the symmetry within, say, the periodic table of elements. I once played sudoku with the genetic code late one night waiting for the power in my building to come back on.


Alan Fox - #54347

March 14th 2011

Alan, are you suggesting DNA does not encode information in the Shannon’s law sense?  Or, are you saying that Shannon’s law is bunk?


Yes to your first; no to your second.

Alan Fox - #54348

March 14th 2011

PS 


Dembski’s “law” of conservation of information is bunk.

nedbrek - #54350

March 14th 2011

Alan, I’m not sure how one would show that DNA does or does not encode information in the Shannon sense.  At a glance, it seems absurd that a set of instructions more complicated than the most sophisticated operating system or database would come from environmental noise.  That would violate every engineering principle (I wonder how much of the evolution debate is the old engineers vs. scientists fight).

I’ve never read Dembski; where does he talk about “conservation of information”?  Information can be created through applied intelligence, and is destroyed by noise.


R Hampton - #54353

March 14th 2011

Dembski’s law of conservation of information:
http://en.wikipedia.org/wiki/Specified_complexity#Law_of_conservation_of_information


nedbrek - #54354

March 14th 2011

Ok, I’ve thought about it some more…

You cannot say DNA is information (in the Shannon sense) - just as you cannot say that an apple is a unit in the sense of 1+1=2 (therefore one apple and another apple is two apples).  The apples are not identical and interchangeable (while the numeral one is).

In this sense, mathematics can only apply to reality by analogy.

Of course, by analogy, DNA is subject to Shannon’s law.  The reproduction process is subject to noise (which results in copy error).  The fact that copy errors are often serious (cancer, birth defects, etc) and that the system has functions for error correction (which makes it even more complex) is evidence that DNA is information.
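As an aside, the Shannon measure itself is easy to compute for any symbol string. A minimal Python sketch (the example sequences below are invented purely for illustration):

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i))."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform mix of the four bases carries the maximum 2 bits/symbol;
# a repetitive sequence carries less.
print(shannon_entropy("ACGTACGTACGTACGT"))  # 2.0
print(shannon_entropy("AAAAAAAATTTTTTTT"))  # 1.0
```

Note that this measures only statistical surprise in the symbol frequencies; it says nothing about whether a sequence is functional, which is precisely the gap the “specified information” debate turns on.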


nedbrek - #54356

March 14th 2011

R Hampton, thank you for the link.  I believe I agree with Dembski on this point, however, I am uncertain how one would go about proving such a “law”.  It is restating the origins argument, which is entirely untestable and unrepeatable.


R Hampton - #54357

March 14th 2011

nedbrek,

I can’t speak for the Math, but the logical arguments of this paper are worth considering:

Gregory Chaitin, “To a Mathematical Theory of Evolution and Biological Creativity,” CDMTCS Research Report 391, Centre for Discrete Mathematics and Theoretical Computer Science, September 2010.

http://www.cs.auckland.ac.nz/CDMTCS//researchreports/391greg.pdf



We present an information-theoretic analysis of Darwin’s theory of evolution, modeled as a hill-climbing algorithm on a fitness landscape. Our space of possible organisms consists of computer programs, which are subjected to random mutations. We study the random walk of increasing fitness made by a single mutating organism. In two different models we are able to show that evolution will occur and to characterize the rate of evolutionary progress, i.e., the rate of biological creativity.


nedbrek - #54362

March 14th 2011

Thanks for the paper ref, I will check it tonight.

Be warned, I am not a fan of the genetic algorithm; I believe it is grossly inefficient. I am a proponent of simulated annealing.


nedbrek - #54384

March 14th 2011

Not the worst paper I’ve ever read (or deeply skimmed).  I get the impression they haven’t written any actual code…
“a particularly economical Turing oracle for the halting problem”
If they had a solution to the halting problem, everyone would be interested in it! (It seems they are saying “a solution exists,” which isn’t interesting.)

The important thing to remember about random search (which includes all sorts of hill climbing, genetic algorithm (GA), and simulated annealing (SA)), is that an infinite amount of time will yield the optimal solution.  The question becomes, how well can you do with less than infinite time!

Hill climbing suffers from the problem of local maxima (or minima, depending on how you write your cost and selection functions).  Simulated annealing can overcome this.

Simulated annealing is used in industry (place and route for semiconductor designs).  It is also, unlike GA, based on real, observed physics.
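The local-maxima point can be made concrete with a toy one-dimensional example (the fitness function below is invented purely for illustration): greedy hill climbing started at the lower peak can never cross the valley, while simulated annealing sometimes accepts downhill moves and so can escape.

```python
import math
import random

# Toy fitness landscape on [0, 10]: a local maximum at x = 2 (height 4)
# and the global maximum at x = 8 (height 10), separated by a valley.
def fitness(x):
    return 4 * math.exp(-(x - 2) ** 2) + 10 * math.exp(-(x - 8) ** 2)

def propose(x, step):
    # Random nearby candidate, clamped to the search domain [0, 10].
    return min(10.0, max(0.0, x + random.uniform(-step, step)))

def hill_climb(x, steps=2000, step=0.1):
    # Greedy: accept a move only if it improves fitness, so the search
    # can never cross the valley between the two peaks.
    for _ in range(steps):
        cand = propose(x, step)
        if fitness(cand) > fitness(x):
            x = cand
    return x

def anneal(x, steps=2000, step=1.0, t0=2.0):
    # Occasionally accept a worse move with probability exp(delta / T);
    # the temperature T cools toward zero, so the search ends greedily.
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-12
        cand = propose(x, step)
        delta = fitness(cand) - fitness(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = cand
    return x

random.seed(1)
print(hill_climb(2.0))  # stays at the local peak near x = 2
print(anneal(2.0))      # can cross the valley toward the global peak
```

The cooling schedule and step sizes here are arbitrary choices, not tuned values; the point is only the structural difference between the two accept rules.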


John - #54386

March 14th 2011

Bilbo:

“Meyer at least gives the appearance of evaluating all the current theories of how life got started by non-intelligent means, and concluding that they all fail.   I’ll let others determine how adequately he succeeded in that evaluation.”

Why would you leave it to others, Bilbo? What are you afraid of learning? 

For example, take Meyer’s attempted takedown of the RNA World hypothesis (there are no actual theories yet). Not only does he misrepresent the hypothesis itself, he then compounds that by withholding the most powerful evidence supporting the hypothesis from his readers.

No one familiar with that Nobel-winning evidence would ever call that an “evaluation.”

R Hampton - #54392

March 14th 2011

The question becomes, how well can you do with less than infinite time

Well, that was a major thrust of the paper. They tested two approaches: an exhaustive search within natural selection (2^N) wherein “each successive bit of creativity takes twice as long as the previous bit did,” and intelligent selection (N) wherein “each successive bit of creativity takes about as long as the previous bit did.” Turns out the rate of evolution found was (N^2) wherein “each successive bit of creativity takes an amount of time which increases linearly.” So you don’t need anything like an infinite amount of time for life to evolve as we observe.
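For intuition about those growth rates, a toy calculation (arbitrary time units, purely illustrative) shows how the total time to accumulate N bits of creativity scales in each regime:

```python
# Total time to accumulate n "bits of creativity", given the cost of
# each successive bit. Regimes as described in Chaitin's paper:
#   exhaustive search: each bit costs twice the previous (~2^N total)
#   intelligent selection: each bit costs the same (~N total)
#   cumulative evolution: each bit costs linearly more (~N^2 total)
def total_time(per_bit, n):
    return sum(per_bit(k) for k in range(1, n + 1))

print(total_time(lambda k: 2 ** k, 10))  # 2046 (exhaustive)
print(total_time(lambda k: 1, 10))       # 10   (intelligent selection)
print(total_time(lambda k: k, 10))       # 55   (cumulative, ~N^2/2)
```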


R Hampton - #54393

March 14th 2011

Nedbrek,
Granted, you have a preference for simulated annealing, but I hope you recognize that AIT (algorithmic information theory) directly addresses arguments Meyer and Dembski make using (Shannon) classical information theory vis-à-vis Kolmogorov complexity.


Rich - #54403

March 14th 2011

Dennis:

I put some effort into reply #54202 on March 13th, making what I think are reasonable concessions to your point while crafting a hopefully articulate reply.  Both Gregory and Sy seem to have found my effort useful.  I hope you will take the time to read that reply, and respond.


nedbrek - #54431

March 15th 2011

R Hampton, after sleeping on this, I believe that the authors are claiming P = NP
(http://en.wikipedia.org/wiki/P_versus_NP_problem)
That is, they are searching an exponential space (2^n, where n is the program size), and doing so in polynomial time (n^2).

Personally, I believe P != NP, but if anyone can prove P = NP, there is a million dollar reward!

