
The Origin of Biological Information, Part 2: E. coli vs. ID


March 24, 2011 Tags: Design

Today's entry was written by Dennis Venema. You can read more about what we believe here.

If your heart is right, then every creature is a mirror of life to you and a book of holy learning, for there is no creature - no matter how tiny or how lowly - that does not reveal God’s goodness.

Thomas à Kempis - Of the Imitation of Christ (c. 1420)

In the first post in this series, we explored the claim made by Stephen Meyer, a leader in the Intelligent Design Movement (IDM), that “specified, complex information” cannot arise through natural means. This is crucial to Meyer’s argument, since any natural mechanism that can be shown to produce information would render null and void his claim that information only arises from intelligent sources.

A second member of the IDM who frequently makes this argument is Douglas Axe, a researcher at the Biologic Institute. Axe’s specialty is protein structure/function relationships, and he has published a few papers in this area in the mainstream scientific literature. Axe’s work also forms the basis for Meyer’s arguments in this area in his book Signature in the Cell. I met Axe a few years ago when I gave a presentation at Baylor, and again last year in Austin for the Vibrant Dance conference (for whatever reason, it seems we only cross paths in Texas). Axe was present in the audience for a discussion session I shared with Richard Sternberg, and we had a significant amount of back-and-forth. As such, I am familiar with his line of argument, and it matches what we saw previously in Signature (as one might expect, since Meyer bases his work on Axe).

Perhaps the best summary of Axe’s argument is the quote I highlighted previously (begins approx. 15:19):

“Basically every gene, every new protein fold… there is nothing of significance that we can show [that] can be had in that gradualistic way. It’s all a mirage. None of it happens that way.”

One of the interesting features of the IDM is that though it has not yet brought forward strong hypotheses with which to test ID, it frequently makes testable predictions about natural processes. Specifically, Axe’s hypothesis is that mutation and natural selection will be unable to produce anything significant in a gradual way.

Has natural selection been Axed?

The ideal way to test this hypothesis, of course, would be to follow a population of organisms over thousands of generations and track any genetic changes that occur to see if they result in any new functions. Even better would be the ability to determine the precise molecular mutations that brought about these changes, and to compare the offspring side-by-side with their ancestors. An experiment with this level of detail might sound too good to be true, but one of exactly this sort has been going on since the late 1980s, studying the bacterium E. coli. It’s called the Long Term Evolution Experiment (LTEE), and it’s the brainchild of Dr. Richard Lenski at Michigan State University.

The LTEE started in 1988 with twelve populations of E. coli, all derived from one ancestral cell. The design of the experiment is straightforward: each day, each of the twelve cultures grows in 10 ml of liquid medium with glucose as the limiting resource. In this medium, the bacteria compete to replicate for about seven generations and then stop dividing once the food runs out. After 24 hours, 0.1 ml of each culture is transferred to 9.9 ml of fresh medium, and the cycle repeats itself. Every so often, the remaining 9.9 ml of leftover bacterial culture is frozen down to preserve a sample of the population at that point in time – with the proper treatment, bacteria can survive for decades in suspended animation. Early in the experiment this was done every 100 generations, and later this was shifted to every 500 generations. A significant feature of the LTEE is that these frozen ancestors can be brought to life again for comparison with their evolved descendants: in essence, the freezers in the Lenski lab are a nearly perfect “living fossil record” of the experiment.
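As an aside, the arithmetic behind “about seven generations” per day is easy to check: a 1:100 daily dilution means the surviving cells must double log2(100) ≈ 6.6 times to regrow to full density. A quick sketch (purely illustrative, in Python):

```python
import math

# Daily transfer: 0.1 ml of culture into 9.9 ml of fresh medium.
dilution_factor = (0.1 + 9.9) / 0.1  # a 100-fold dilution

# To regrow to the previous day's density, the population must double
# log2(100) times: the "about seven generations" per daily cycle.
generations_per_day = math.log2(dilution_factor)
print(round(generations_per_day, 2))  # 6.64
```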

It is important to note several things about the LTEE. First, there is no artificial selection taking place. The environment for the bacteria is kept constant: the same food, the same temperature and the same dilution routine are maintained each day. Second, the bacteria in the experiment are asexual: this means that genetic recombination, a hugely important source of genetic variation in sexual organisms, is absent. New genetic combinations in the LTEE must arise solely by mutation. Third, the bacterial populations that started the experiment are unlike any natural population, since they are all identical clones of each other. (In other words, genetic variation in the original 12 cultures was essentially zero). While natural populations have genetic variation to draw on, these twelve cultures started from scratch.

Since its inception, the twelve cultures have gone their separate ways for over 50,000 generations. Early on, the cultures quickly adapted to their new environment, with variants in each population arising and outcompeting others. In order to confirm that the new variants indeed represented increases in function (and thus, an increase in “information”) the evolved variants were tested head-to-head against their revivified ancestors. Numerous papers from the Lenski group have documented these changes in great detail. What was remarkable about the early work from the Lenski group was that tracking the 12 cultures showed that evolution in the different populations was both contingent and convergent: similar, but not identical, mutations appeared in many of the lines, and the different populations had similar, but not identical, increases in fitness relative to the ancestral populations. In the details, evolution was contingent, but overall, the pattern was convergent. As Lenski puts it:

To my surprise, evolution was pretty repeatable. All 12 populations improved quickly early on, then more slowly as the generations ticked by. Despite substantial fitness gains compared to the common ancestor, the performance of the evolved lines relative to each other hardly diverged. As we looked for other changes—and the “we” grew as outstanding students and collaborators put their brains and hands to work on this experiment—the generations flew by. We observed changes in the size and shape of the bacterial cells, in their food preferences, and in their genes. Although the lineages certainly diverged in many details, I was struck by the parallel trajectories of their evolution, with similar changes in so many phenotypic traits and even gene sequences that we examined.

In other words, there were many possible genetic states of higher fitness available to the original strain, and random mutation and natural selection had explored several paths, all leading to a higher amount of “specified information” – information that specifies increased reproduction and survival in the original environment. All this was by demonstrably natural mechanisms, with a complete history of the relevant mutations, the relative advantages they conferred, and the dynamics of how those variants spread through a population. The LTEE is at once a very simple experiment, and an incredibly detailed window into the inner workings of evolution.

And so the work continued, day in and day out, for years – until one day, a completely new biological function showed up in one of the cultures.

One of the defining features of E. coli is that it is unable to use citrate as a food source. The food used to culture the strains, however, has a large amount of citrate in it – a potential food source that remained beyond the reach of the evolving strains. For tens of thousands of generations, no variants arose that could make use of this potential resource – even though every possible single DNA letter mutation (and every possible double mutation combination) had been “tested” at some point along the way. There seemed to be no way for the populations to generate the “specified information” needed to use citrate as a food source – they couldn’t “get there from here.” Then one day, the fateful change occurred in one of the 12 populations. Lenski puts it this way:

Although glucose is the only sugar in their environment, another source of energy, a compound called citrate, was also there all along as part of an old microbiological recipe. One of the defining features of E. coli as a species is that it can’t grow on citrate because it’s unable to transport citrate into the cell. For 15 years, billions of mutations were tested in every population, but none produced a cell that could exploit this opening. It was as though the bacteria ate dinner and went straight to bed, without realizing a dessert was there waiting for them.

But in 2003, a mutant tasted the forbidden fruit. And it was good, very good.

Details, details

Tracking down the nature of this dramatic change led to some interesting findings. The ability to use citrate as a food source did not arise in a single step, but rather as a series of steps, some of which are separated by thousands of generations:

  1. The first step is a mutation that arose at around generation 20,000. This mutation on its own does not allow the bacteria to use citrate, but without this mutation in place, later generations cannot evolve the ability to use citrate. Lenski and colleagues were careful to determine that this mutation is not simply a mutation that increases the background mutation rate. In other words, a portion of what later becomes “specified information for using citrate” arises thousands of generations before citrate is ever used.

  2. The earliest mutants that can use citrate as a food source do so very, very poorly – once they use up the available glucose, they take a long time to switch over to using citrate. These “early adopters” are a tiny fraction of the overall population. The “specified information for using citrate” at this stage is pretty poor.

  3. Once the (poor) ability to use citrate shows up, other mutations arise that greatly improve this new ability. Soon, bacteria that use citrate dominate the population. The “specified information for using citrate” has now been honed by further mutation and natural selection.

  4. Despite the “takeover”, a fraction of the population unable to use citrate persists as a minority. These cells eke out a living by being “glucose specialists” – they are better at using up glucose rapidly and then going into stasis before the slightly slower citrate-eaters catch up. So, new “specified information to get the glucose quickly before those pesky citrate-eaters do” allows these bacteria to survive. The two lineages in this population have thus partitioned the available resources and now occupy two different ecological niches in the same environment. As such, they are well on their way to becoming different bacterial species.

Don’t tell the bacteria

The significance of these experiments for the Intelligent Design Movement is clear. Complex, specified information can indeed arise through natural mechanisms; it does not need to arise all at once, but rather accrue over thousands of generations; independent mutations that do not confer a specific advantage can later combine with other mutations to produce new functions; new functions can be quite inefficient when they arise and then be honed through further mutations and selection; and the entire process can occur without ever reducing the fitness of a specific lineage within a population. Moreover, these findings have been demonstrated with a full historical record of the genetic changes involved for the entire population they occurred in, as well as full knowledge of their fitness at every step along the way.

In other words, what the IDM claims is impossible, these “tiny and lowly” organisms have simply been doing – and it only took 15 years in a single lab in Michigan. Imagine what could happen over 3,500,000,000 years over millions of square miles of the earth’s surface.

In the next post in this series, we will look at an example of new information and function arising during vertebrate evolution: the elegant work of the Thornton lab on steroid hormones and their protein receptors.


Dennis Venema is professor of biology at Trinity Western University in Langley, British Columbia. He holds a B.Sc. (with Honors) from the University of British Columbia (1996), and received his Ph.D. from the University of British Columbia in 2003. His research is focused on the genetics of pattern formation and signaling using the common fruit fly Drosophila melanogaster as a model organism. Dennis is a gifted thinker and writer on matters of science and faith, but also an award-winning biology teacher—he won the 2008 College Biology Teaching Award from the National Association of Biology Teachers. He and his family enjoy numerous outdoor activities that the Canadian Pacific coast region has to offer. Dennis writes regularly for the BioLogos Forum about the biological evidence for evolution.




This article is now closed for new comments. The archived comments are shown below.

Bilbo - #56265

March 31st 2011

Hi John,

I’m not “afraid” of doing the calculation, I just don’t happen to have a calculator with me at the library when I blog.  But I won’t argue the question of Behe anymore.  If you want to think I’m “desperate,” go ahead.  Behe’s real point was that if more than two proteins need to evolve before they have a function, then it won’t happen.  I don’t think anybody disagrees with that conclusion.  The question is whether three or more proteins were needed simultaneously in order for them to have a function.  And I’m undecided on that point.   But now to the rest of your comments:

John: “Shouldn’t a hypothesis be consistent with all the extant data to have any validity at all?”

A hypothesis gains strength the more data it can explain, and loses strength the less data it can explain.


Bilbo: “Of course, if we find a good ID explanation for the start codon, then we have additional evidence for the ID hypothesis.”

John: “That’s truly insane. Explanations aren’t evidence.”

For historical sciences, yes, explanations are evidence. 



Bilbo: “In other words, you and I have engaged in trying to either falsify or verify ID.  Sounds a lot like science to me.”

John: “I don’t see that you have done anything of the sort.”

That’s true, you did all the work on this one, but I goaded you on, so I’ll take some of the credit.

John:  “Remember, it’s easy to generate testable ID hypotheses, but no one on your side has sufficient faith in any of them to test their empirical predictions.”

A testable hypothesis would be that the start codon has a rational and foresighted explanation.  Right now, you have offered a good, stiff challenge to that hypothesis.  I commend you.  I even think I’ll write it up at my blog.




Don Johnson - #56440

April 1st 2011

While the Lenski experiment is interesting, there is no evidence presented for a net increase in functional information, which can only be shown by a rigorous application of the appropriate formulae to the pre-mutated and post-mutated genome.  I raised this on 3/11 when I said:

I look forward to seeing any example of a net increase in functional information via mutation, as I have yet to see a verified example of such a claim. Certainly, a mutation changes the information content stored in a polynucleotide chain, which can result in an advantage under certain conditions, such as a bacterium mutating at the point where an antibiotic would attach, to produce resistance to that antibiotic. Sickle cell anemia is caused by a point mutation in the hemoglobin beta gene, which normally is detrimental due to lowered oxygen-carrying capability. Since the plasmodium parasite that causes malaria has difficulty invading cells with that mutation, those having this genetic defect are protected against malaria, which is a selective advantage in high malaria regions. The nylon-digesting bacterium is often cited as an example of information increase via mutation, especially since nylon didn’t exist before 1935. This may be due to a “frame-shift” mutation (with decoding off by one nucleotide), or more likely, an adaptation of a plasmid (non-chromosomal DNA with transposable elements). Such bacteria digest nylon with only 8% efficiency, and can no longer digest the normal diet of cellulose, so a net functional information loss is evident in the genome.

I look forward to seeing your results applied to the functional information equations, most preferably Durston’s Hf(t) = -Σ P(Xf(t)) log P(Xf(t)), where Xf denotes the conditional variable of the given sequence data (X) on the described biological function f, which is an outcome of the variable (F), using the joint variable (X,F) [Theoretical Biology and Medical Modelling, 12/6/07]. A somewhat less useful equation is Hazen’s I(Ex) = -log2[F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function Ex [PNAS: 104-1, 5/15/07, p8574-8581]. Those equations don’t address the major information problem, however, which is accounting for the prescriptive algorithms instantiated in the DNA memory to be read by enzyme computers to produce the functional output as various control messages or the ultimate construction of a protein. Every protein is the result of a real computer program’s execution. Darwinism requires that sometimes a random modification of an existing functional program (the problems of the origination of those programs is covered in detail in my “Programming of Life” book) sometimes produces a BETTER functional program, a concept totally foreign to information science.
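For readers wanting to see what the two cited measures actually compute, here is a toy illustration in Python. The probabilities are invented for illustration only, not real genome data:

```python
import math

def hazen_information(f_ex: float) -> float:
    """Hazen's I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all
    possible configurations achieving at least function level Ex."""
    return -math.log2(f_ex)

def shannon_entropy(probs) -> float:
    """H = -sum(p * log2 p): the entropy form underlying Durston's Hf(t)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy numbers: if 1 in a million sequences performs the function,
print(round(hazen_information(1e-6), 2))  # 19.93 bits

# A site where all four DNA bases are equally likely carries 2 bits:
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```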


R Hampton - #56457

April 1st 2011

Don Johnson,

In regards to, “seeing any example of a net increase in functional information via mutation, as I have yet to see a verified example of such a claim”, is it your contention that there are no new genes within the hundreds of dog breeds that have descended from ancestral wolves? If so, that’s quite a claim.

If not, then how did these new genes come into existence?  While mankind has applied artificial selection throughout most of the history of the Dog, I don’t believe we have ever undertaken genetic engineering of these breeds. Thus mutation (or some other natural mechanism) must be responsible for new dog genes. Would you agree?


Don Johnson - #56507

April 2nd 2011

“is it your contention that there are no new genes within the hundreds of dog breeds that have descended from ancestral wolves? If so, that’s quite a claim.”

I haven’t examined the genome of dogs, but do know all breeds are of species “Canis lupus familiaris.”  Breeds can interbreed producing fertile offspring (though it is probably unwise to breed a poodle bitch with a great Dane).  All evolution “evidence” (except for a minor gene mutation for mouse color modification at minute 39) in the 2-hour PBS Nova program “What Darwin Never Knew” (pbs.org/wgbh/nova/evolution/darwin-never-knew.html) involved turning existing genes on or off via switches (no NEW information).  The evidence in that video couldn’t falsify the hypothesis “the first complex life was created with all genes ever needed; evolution is simply gene expression.”  Watch the video, and post falsification evidence from (only) the video.   I included this as an example in my video “Defeating Creationism” (http://www.vimeo.com/21367259).


R Hampton - #56809

April 4th 2011

Dogs have evolved a third gene that affects their coat color - the K locus mutation - that binds to the existing Melanocortin 1 receptor (Mc1r), resulting in the expansion of “the functional role of β-defensins, a protein family previously implicated in innate immunity.” Among wolves, this mutation is found only in North American wolves. Because they have recently hybridized with dogs, it is inferred the N.A. wolves have inherited the gene from dogs.

Small dogs have evolved a regulatory sequence (not a gene) that affects the IGF1 gene resulting in small body sizes. The regulatory sequence is not found in medium or large dogs, or wolves.


John - #56465

April 1st 2011

Bilbo:

“I’m not “afraid” of doing the calculation, I just don’t happen to have a calculator with me at the library when I blog.”

Bilbo, you don’t need a calculator to multiply numbers with one significant digit provided in scientific notation. A calculator would make it more difficult, not less. You’re afraid that you can’t explain the answer.

“But I won’t argue the question of Behe anymore.  If you want to think I’m “desperate,” go ahead.”

Thanks, I will. It’s clear that when you perceive a threat to your beloved ID, your curiosity knob goes down to zero. I feel sorry for someone like that.

“Behe’s real point was that if more than two proteins need to evolve before they have a function, then it won’t happen.”

I’ll go with what Behe wrote, which is simply wrong, because what you wrote is so vague as to be meaningless. 

“I don’t think anybody disagrees with that conclusion.”

Par for the course—hypotheses are presented as conclusions.

“The question is whether three or more proteins were needed simultaneously in order to for them to have a function.  And I’m undecided on that point.”

It’s meaningless in the manner you have expressed it.

“A hypothesis gains strength the more data it can explain, and loses strength the less data it can explain.”

You’re closing your eyes to failure in this case.

“For historical sciences, yes, explanations are evidence.”

No, Bilbo, only someone afraid of the evidence would say something so ridiculous. There’s a whole lot more real, experimental evidence that’s consistent with evolution being more constrained at the front end than the back end of translation, but you’re afraid to learn about it. You are dead inside.

“A testable hypothesis would be that the start codon has a rational and foresighted explanation.”

What empirical predictions does that make? If you can’t state them and they aren’t empirical, it ain’t testable.

Remember, your hypothesis has to explain the clear difference between start and stop mechanisms.

“Right now, you have offered a good, stiff challenge to that hypothesis.  I commend you.  I even think I’ll write it up at my blog.”

Why would that be better than testing it for yourself?

R Hampton - #56474

April 1st 2011

Don Johnson,

I had another question about one of your claims: “Darwinism requires that sometimes a random modification of an existing functional program (the problems of the origination of those programs is covered in detail in my “Programming of Life” book) sometimes produces a BETTER functional program, a concept totally foreign to information science.”

Isn’t genetic programming an example within Information Science wherein random modifications to functional programs can out-produce the progenitor?

There are now 36 instances where genetic programming has produced a human-competitive result. These human-competitive results include 15 instances where genetic programming has created an entity that either infringes or duplicates the functionality of a previously patented 20th-century invention, 6 instances where genetic programming has done the same with respect to a 21st-century invention, and 2 instances where genetic programming has created a patentable new invention. These human-competitive results come from the fields of computational molecular biology, cellular automata, sorting networks, and the synthesis of the design of both the topology and component sizing for complex structures, such as analog electrical circuits, controllers, and antenna.
- Dr. John R. Koza, December 30, 2003

http://www.genetic-programming.com/humancompetitive.html
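To make the disputed mechanism concrete, here is a minimal evolutionary-search sketch in Python. One caveat up front: the target string and fitness function below are designed by the programmer, which is precisely the objection Johnson raises; what the sketch does show is that the variation step itself is random and undirected, with selection alone supplying direction.

```python
import random

random.seed(0)  # deterministic run for reproducibility

# Toy (1+1)-style evolutionary search. The TARGET and fitness function
# are invented for illustration; they play the role of the "designed"
# fitness functions under dispute in this thread.
TARGET = "METHIONINE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s: str) -> int:
    """Number of positions matching the target (higher is better)."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str) -> str:
    """Change one randomly chosen position to a random letter (undirected)."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

# Start from a random string; keep the mutant whenever it is no worse.
current = "".join(random.choice(ALPHABET) for _ in TARGET)
for _ in range(5000):
    candidate = mutate(current)
    if fitness(candidate) >= fitness(current):  # selection, not mutation, directs
        current = candidate

print(current, fitness(current))  # typically converges to the target string
```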



Don Johnson - #56509

April 2nd 2011

All genetic programs obviously involve human-engineered algorithms, fitness functions, and targets.  There is no evidence that randomly changing existing functional programs would or even could produce new programs with increased functionality.  The behind-the-scenes controls imposed by experimenters falsely give the impression that randomness can produce useful information.  For example, the Avida program has 15 non-modifiable instructions, which is necessary for it to work at all.  I cover this in some detail in my “Programming of Life” book.


R Hampton - #56811

April 4th 2011

You’re moving the goal posts—specifically, you claimed that: “Darwinism requires that sometimes a random modification of an existing functional program ... sometimes produces a BETTER functional program”

You did not stipulate a new program or a new function, but an improved function. Are you willing to admit that genetic programming proves that random changes to code can do exactly that?


Don Johnson - #56955

April 5th 2011

The only “improved”  functionality in genetic programming involves human-designed controls that steer the results toward functionality.  The changes can be as random as possible, but the path taken is designed.  For example, fill a computer memory with random data and start execution at a random address and expect something functional as output (even execution depends on functional hardware, but we’ll ignore that technicality).  If random changes can produce a better program, eventually one would expect a functional result from such a scenario.
For Darwinism to ever produce improvement, the programs that are executed to produce proteins and perform controls would somehow be modified by random changes to, at least occasionally, produce better functional programs.  This concept is foreign to information science, since any random changes in useful programs are always steered toward functionality by the fitness functions and/or targets.


R Hampton - #56988

April 5th 2011

“If random changes can produce a better program, eventually one would expect a functional result from such a scenario.”

Selection (natural or artificial) is an important part of both Evolutionary theory and Genetic Programming (a.k.a. Evolutionary Computation).  There is no path per se—instead, fitness with respect to the environment is tested (survivability & population dynamics).

This concept is well known to “information science,” for example: IEEE Transactions on Evolutionary Computation

http://www.ieee-cis.org/pubs/tec/


Don Johnson - #57052

April 6th 2011

I certainly agree the concept is well-known, but as I’ve stated, all evolutionary/genetic algorithms have designed targets and fitness functions, without which the programs would be non-functional.  Evolution supposedly works without design/planning/foresight, so there is no relationship between “evolutionary algorithms” and evolution.  The very term involves “algorithm,” which by definition is a step-by-step process for obtaining a problem solution.  Evolution is undirected, with chance producing any change at the genomic level.  If the information loss of a change allows the organism to survive, the organism’s fitness (with that loss) for reproduction would be filtered by “natural selection.”  NS is NOT a mechanism for genetic change; it only filters the existing battery of algorithms of the phenotype.  There are zero instances of random changes producing an increase in functional information (even a Shannon information increase would require a mutation within a repeated DNA sequence).  If you know of any, please post the results of pre-mutated and post-mutated calculations using well-known equations for computation of functional information.


nedbrek - #57053

April 6th 2011

Don, consider, for example, a compressed executable image (self-extracting archive).  Now, copy the archive, and permute some of the bits.  The two archives cannot be (easily) compressed below the size of the original archive (which is one measure of information content).



This is how computer viruses avoid virus scanners.
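nedbrek’s compressibility point can be illustrated in a few lines of Python (a toy example, not from the discussion): random byte changes make data less compressible, i.e. they raise its Shannon-style information content, regardless of whether the result is functional.

```python
import random
import zlib

random.seed(1)  # deterministic run

# Highly ordered data compresses very well...
ordered = b"ABCD" * 1000
compressed_ordered = len(zlib.compress(ordered, 9))

# ...but after randomly "mutating" some bytes, the same-length data
# compresses far less: its Shannon-style information content has risen,
# though nothing has been said about functionality.
mutated = bytearray(ordered)
for _ in range(500):
    mutated[random.randrange(len(mutated))] = random.randrange(256)
compressed_mutated = len(zlib.compress(bytes(mutated), 9))

print(compressed_ordered, compressed_mutated)  # mutated copy is much larger
```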



R Hampton - #57070

April 6th 2011

Yes, but the algorithm is used as a proxy for Natural Selection, not the random code changes. Without the algorithm, mutation would continue without respect to the desired goal since there are no environmental pressures to “direct” the results. Given the very narrow scope of most uses of Evolutionary Computation, it is not efficient to model an environment conducive to shaping the mutations towards the desired end. Nevertheless, random code changes are the mechanism by which functionality is increased (sometimes with a net increase of information).


Don Johnson - #57096

April 6th 2011

Shannon “information” deals with compressibility, and says nothing about functionality.
I’m not sure what point you’re making.


Don Johnson - #57097

April 6th 2011

It is true that evolutionary algorithms are extremely inefficient for “goal-oriented” solutions, but evolution has no goal.  To classify random changes as a “mechanism” gives the impression that something can be caused “by chance,” which has no causative power since it is a non-entity.  Show me the results of applying the functional information formulae if you insist that a random change is capable of causing a net functional information increase.


R Hampton - #57123

April 6th 2011

To classify random changes as a “mechanism” gives the impression that something can be caused “by chance,” which has no causative power since it is a non-entity.

I don’t understand—are you saying there is no such thing as a random mutation? Randomness describes the lack of direction for the change, but causation is specific to each particular change (copying errors, chemical interference, radiation, etc.). Hence the mechanism of random change is a catch-all label for the many kinds of mutations. In Evolutionary Computation, the program acts as the mutagen(s), but the place and type of mutation is indeed random.


Bilbo - #56597

April 3rd 2011

John:  “Bilbo, you don’t need a calculator to multiply numbers with one significant digit provided in scientific notation. A calculator would make it more difficult, not less. You’re afraid that you can’t explain the answer.”

How about you just do the calculation and let those of us who are mathematically challenged know the answer?

Bilbo: “Behe’s real point was that if more than two proteins need to evolve before they have a function, then it won’t happen.”

John:  “I’ll go with what Behe wrote….”

Me too:    http://telicthoughts.com/the-real-issue-of-behes-edge/

John:  “You’re closing your eyes to failure in this case.”

No, I commended you for presenting a good challenge to ID.  I even put it up at my blog:

http://bilbos1.blogspot.com/2011/03/good-challenge-to-id.html


John:  “There’s a whole lot more real, experimental evidence that’s consistent with evolution being more constrained at the front end than the back end of translation….”

Sounds interesting.  Tell me more.

Bilbo:  “A testable hypothesis would be that the start codon has a rational and foresighted explanation.”

John:  “What empirical predictions does that make? If you can’t state them and they aren’t empirical, it ain’t testable.”

You’ve shown that the start codon appears to be a very inefficient system.  An ID hypothesis would need to show that there is a good reason to design it with this inefficiency.


John: “Remember, your hypothesis has to explain the clear difference between start and stop mechanisms.”

Sure.


Bilbo - #56936

April 5th 2011

Hi John,

Mike Gene has responded to your challenge here:

http://designmatrix.wordpress.com/2009/03/02/special-stop-codons-in-the-exceptional-code/#comment-2904


Bilbo - #56938

April 5th 2011

Mike’s reply:

Y’see the start codon has an important job not shared by any other codon – it sets the reading frame. If you want a translation system that does not encode an amino acid for its start codon, you’ll need a completely different mechanism to set the reading frame. Has anyone ever proposed such a mechanism? And if so, is there any evidence that such a system would do a better job at setting the proper reading frame? Until those questions can be answered, no problem has been demonstrated with using the start codon to code for an amino acid.

But what about your claim that “methionine often interferes with the correct folding of the protein?”  Really?  How often is often? 

The second process that removes the methionine from many proteins is known as methionine excision and depends on the protein methionine aminopeptidase.  You say that it seems needlessly inefficient, but again, think design tradeoffs and remember that efficiency is not the sole criterion of good design. 

So at this point I would direct your attention to something called the N-end rule.  Here is a review paper written by the scientist who uncovered this rule. 

http://2008.igem.org/wiki/images/a/a5/JHU_0708_paper_TheN-endRule.pdf

It basically states the N-terminal amino acid plays a key role in determining the half-life of a protein.  If you look at table 1, you’ll see that a protein whose first amino acid is methionine has a half-life of 30 hours.  If it was tyrosine, it would be 10 minutes.  And if you check out figure 1, you’ll see that methionine is considered a stabilizing residue (one of the three universal stabilizing residues).  So the start codon represents a default state for giving a protein a long lifespan.  By cutting away the methionine, the cell can in effect set the lifespan of the newly made protein that is shorter.  Looks pretty elegant to me.

But there is more meaty stuff to get out of the N end rule.
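Mike’s half-life argument can be expressed as a simple lookup (only the two values he quotes from the review’s Table 1 are included here; the rest of the table is omitted rather than guessed):

```python
# N-end rule sketch: the identity of the N-terminal residue correlates
# with protein half-life. Values below are the two quoted in the comment
# (Table 1 of the cited review); all other residues are left out.
half_life = {"Met": "30 hours", "Tyr": "10 minutes"}

def nterm_half_life(peptide):
    """Return the illustrative half-life implied by the first residue."""
    return half_life.get(peptide[0], "unknown")

print(nterm_half_life(["Met", "Lys", "Gly"]))  # '30 hours'
print(nterm_half_life(["Tyr", "Lys", "Gly"]))  # '10 minutes'
```

On this picture, methionine excision is a switch from the long-lived default to a shorter, residue-determined lifespan.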


John - #56970

April 5th 2011

Bilbo:
“Mike Gene has responded to your challenge here: “

Really? Where are the empirical predictions in Mike’s alleged response?

Can you think of a major design flaw that is the consequence of methionine setting the reading frame? Or are you just relieved that you can point to hearsay and shut your own curiosity down?


Bilbo - #57093

April 6th 2011

John: “Really? Where are the empirical predictions in Mike’s alleged response?”

The prediction is that there will be an ID explanation that is rational and foresighted.

“Can you think of a major design flaw that is the consequence of methionine setting the reading frame?”

I’m not even sure what it means to “set the reading frame,” and asked Mike about it.  Do you know what it means?

“Or are you just relieved that you can point to hearsay and shut your own curiosity down?”

I posted Mike’s response here, so that you, the scientist, can read it and let me, the armchair philosopher, know if there are problems with Mike’s response.  Are there?


John - #57100

April 6th 2011

“The prediction is that there will be an ID explanation that is rational and foresighted.”

No, that’s absurd. I asked for an empirical prediction. “Empirical” in this context means free of interpretation—what you will directly see, not how you will judge it. Don’t you see that the whole point of prediction is keeping ourselves honest and not fooling ourselves (by making predictions before we know the answer), and that you and Mike are refusing to do that?

“I’m not even sure what it means to “set the reading frame,” and asked Mike about it.  Do you know what it means?”

Yes.

“I posted Mike’s response here, so that you, the scientist, can read it and let me, the armchair philosopher, know if there are problems with Mike’s response.  Are there?”


Huge ones. I am trying to get you to think critically and honestly about biology instead of running to the false security of hearsay.

“The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.” —Richard Feynman

You’re shutting down your curiosity and boiling everything down to hearsay so that you can fool yourself that ID is viable.

Alan Fox - #57108

April 6th 2011

@ Bilbo


The reading frame refers to how a protein sequence is inscribed in DNA and RNA as triplets. There is no “punctuation”, so translation must start at the first nucleotide of the first triplet for the correct protein to be synthesized. If a mutation adds or deletes a single nucleotide (or any number not divisible by three), the whole subsequent sequence will be mistranslated (as all triplets code for an amino acid) until a stop codon is encountered. The methionine start codon is paramount in this process of ensuring the accuracy of translation.
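Alan’s point about punctuation-free triplets can be illustrated with a short Python sketch (the four-entry codon table is a toy subset of the real genetic code, assumed only for illustration):

```python
# Toy translation: a single inserted nucleotide shifts every downstream
# triplet, so everything after the insertion is mistranslated.
codon_table = {"AUG": "Met", "AAA": "Lys", "GGG": "Gly", "UAA": "STOP"}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):  # read in non-overlapping triplets
        aa = codon_table.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGAAAGGG"))   # ['Met', 'Lys', 'Gly']
print(translate("AUGGAAAGGG"))  # one inserted G: ['Met', '???', '???']
```

The inserted base leaves the start codon intact but garbles every codon after it, which is exactly why the frame must be fixed correctly at initiation.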

Bilbo - #57259

April 7th 2011

John: “No, that’s absurd. I asked for an empirical prediction. ‘Empirical’ in this context means free of interpretation—what you will directly see, not how you will judge it.”

Mike’s answer would be that determining whether something is designed, when we do not have independent evidence of a designer, cannot be free of interpretation. He may be right. But that does not mean that we cannot come to reasonable conclusions regarding design. You have offered a challenge to the start codon being designed, based on its inefficiency. So to meet that challenge, one must either show that it isn’t inefficient, or that there is some good design reason (a tradeoff) for allowing the inefficiency.

Bilbo: “I’m not even sure what it means to “set the reading frame,” and asked Mike about it.  Do you know what it means?”

John: “Yes.”

Good, would you explain it to me, please?

“Huge ones. I am trying to get you to think critically and honestly about biology instead of running to the false security of hearsay.”

I’m quite willing to think critically and honestly about biology.  But I haven’t limited time to research the answers.  If you want to further this debate, you’ll have to supply the information.

“You’re shutting down your curiosity and boiling everything down to hearsay so that you can fool yourself that ID is viable.”

No.  I’m quite curious.  That’s why I posted your challenge at my blog.  That’s why I posted Mike’s response here.  That’s why, if you answer Mike, I will post that response at my blog, also.

                 

Bilbo - #57261

April 7th 2011

Alan: “The methionine start codon is paramount in this process of ensuring the accuracy of the translation process.”

Thanks, Alan, but why have a start codon that codes for an amino acid, that must then be removed before protein folding takes place?  Why couldn’t the start codon be a triplet that does not code for an amino acid, similar to the stop codons? 


John - #57272

April 7th 2011

Bilbo:

“Mike’s answer would be…”

Bilbo, you’re trying to make everything hearsay. What matters are the empirical predictions. The hypothesis that evolutionary mechanisms could design something far more intelligent for termination than they could for initiation makes empirical predictions. Why are you more curious about what people say than you are about the evidence? Can you engage your mind enough to see the empirical predictions for yourself?

“Good, would you explain it [setting the reading frame] to me, please?”

Mike’s response made no sense, as a noncoding tRNA could set the reading frame just as easily as a coding one does. 

“I’m quite willing to think critically and honestly about biology.  But I haven’t limited time to research the answers.”

So you have unlimited time?

“If you want to further this debate, you’ll have to supply the information.”

If you’re interested in truth over wishful thinking, you’ll have to engage in thinking instead of judging hearsay from a debate.


“No.  I’m quite curious.  That’s why I posted your challenge at my blog.”

That doesn’t suggest curiosity to me. Curiosity would lead you to ask me, “Why would evolution come up with a smart solution at one end but not the other? What’s the limitation on evolution that wouldn’t be a limitation for intelligent design, including front-loading?” Get it?

“Setting the reading frame” is vapid. The N-end rule is irrelevant, because designing a noncoding initiating tRNA would allow much more flexibility in regulating protein half-life than the current mechanism.

You also should know that a vast array of inefficient mechanisms are present to modify the N-termini of proteins, not just cleavage. For example, many proteins require modifications to N-terminal methionines (which are mediated by multiple enzymes) to have activity.

To see just one aspect of the problems with Mike’s “setting the reading frame,” look at the following mRNA sequence with a genetic code table:

………AUGAAUGAAUG……

In how many different reading frames could translation be initiated?
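Readers who want to check John’s puzzle mechanically can scan the fragment in Python (only the 11 nucleotides shown are scanned; the flanking “…” context is unknown and left out):

```python
# Find every AUG in the shown fragment and note which of the three
# possible reading frames (offset mod 3) each one would establish.
seq = "AUGAAUGAAUG"

frames = sorted({i % 3 for i in range(len(seq) - 2) if seq[i:i + 3] == "AUG"})
print(frames)  # AUG occurs at offsets 0, 4, and 8 -> frames [0, 1, 2]
```

Because the AUGs overlap, initiation could in principle occur in all three frames, which is the flaw John is pointing at in “setting the reading frame”.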

Alan Fox - #57371

April 8th 2011

Bilbo: “Thanks, Alan, but why have a start codon that codes for an amino acid, that must then be removed before protein folding takes place? Why couldn’t the start codon be a triplet that does not code for an amino acid, similar to the stop codons?”


I don’t know, Bilbo. I, naively, see no reason why the start codon could not just “start” and methionine be coded for as and when necessary with its own codon. It might be thought that, if you accept evolution is broadly true, this is the result of a “frozen accident” and descendants of the first organism to end up with this arrangement were so successful, relative to predecessors, that they ended up as sole inheritors of the Earth. There’s no evolutionary pathway now available to substitute the apparently more logical “start only codon”. If you think a designer was at work, well, he can design however he chooses.

As to why, I don’t know that either. The metaphysical and religious explanations of why the universe, why life, why us don’t seem at all satisfactory to me.

John - #57385

April 8th 2011

“I don’t know, Bilbo. I, naively, see no reason why the start codon could not just ‘start’ and methionine be coded for as and when necessary with its own codon.”


It’s worth noting that neither of the reasons “Mike” gave work better with methionine than they would using the far more intelligent start codon that doesn’t encode any aa residue.

“It might be thought that, if you accept evolution is broadly true, this is the result of a “frozen accident” and descendants of the first organism to end up with this arrangement were so successful, relative to predecessors, that they ended up as sole inheritors of the Earth. There’s no evolutionary pathway now available to substitute the apparently more logical “start only codon”.”

And then we get into the empirical predictions of the evolutionary hypothesis that explain the difference—that there is/was a pathway available to allow the substitution of the far more logical stop codons that we have today. 

So, Bilbo, can you see the first thing that would be involved in replacement of a stop codon encoding an amino acid with one that doesn’t? Or alternatively, replacement in a system with no stop codons at all? 

“Mike” won’t be of any help to you on this. Do you see that it’s a prediction as long as YOU don’t know the answer, even if I do? Do you see why debates are worthless relative to predictions that you make and then test? That’s why I don’t feed you the answers and “Mike” does.

Bilbo - #57410

April 8th 2011

Hi Alan,

Yes, the “frozen accident” answer is the favorite non-teleological answer, I believe. 

“If you think a designer was at work, well, he can design however he chooses.”

True, but we are assuming that the designer is intelligent, rational, and foresighted.  And such a designer doesn’t design however he chooses, but in a way that exhibits intelligence, rationality and foresight.  So if our assumption is correct, then we should be able to find features about the start codon that exhibit intelligence, rationality and foresight.


Bilbo - #57412

April 8th 2011

Bilbo: ” But I haven’t limited time to research the answers.”


John: “So you have unlimited time?”

Oops.  Typo.  I have very limited time to research the answers.

But thanks, I think you’ve given me enough info to piece together a response from you for Mike. 


John - #57414

April 8th 2011

“But thanks, I think you’ve given me enough info to piece together a response from you for Mike.”


Bilbo, your desperation in pretending that science is a debate is amazing. What is preventing you from using your own mind?

“So if our assumption is correct, then we should be able to find features about the start codon that exhibit intelligence, rationality and foresight.”

No, Bilbo, allowing for interpretation after you get the data isn’t science. Why are you afraid of the evolutionary hypothesis that makes real empirical predictions about mechanism?

John - #57416

April 8th 2011

“Hi Alan, Yes, the ‘frozen accident’ answer is the favorite non-teleological answer, I believe.”


Bilbo, you persist in misrepresenting hypotheses as arguments and Mike’s arguments as hypotheses. 

If the start mechanism is less intelligent than the stop mechanism, there is a clear empirical prediction that can be derived from the hypothesis that the start is less changeable than the stop.
 
Again, forget about who says what and engage your own brain. Do you see that I am giving you more credit than Mike is? Hint: mutants.
