# Why Dembski’s Design Inference Doesn’t Work. Part 2

*Today's entry was written by James Bradley. Please note the views expressed here are those of the author, not necessarily of The BioLogos Foundation. You can read more about what we believe here.*

This is the second in a two-part series taken from a scholarly essay by James Bradley, which can be found here. In his last post, Dr. Bradley explained how Dembski’s “explanatory filter” for detecting design works. He argues that the model contains two fatal flaws: one involving the small probability requirement and the other involving the three-way classification – regularity, chance, and design.

We at BioLogos think this essay is of great significance in potentially dispelling one of the important prongs of the Intelligent Design movement as led by fellows of the Discovery Institute. Indeed, we think it is so important that we asked Bill Dembski if he would be willing to respond on our site. We've not yet heard back from him, but we are anxious to learn whether he is willing to concede the points being made here by Dr. Bradley. The hallmark of good science is being able to concede when a particular model doesn't work anymore and then move on. We think Bill's *Design Inference* is a model in crisis. Will he be able to save it, or will he concede that it is now time to take this particular mathematical approach out of the ID movement? We are anxious to know his thoughts.

In my previous post, I summarized Dembski’s explanatory filter (see figure) and suggested that it suffers from two fatal flaws: (1) the small probability requirement and (2) the three-way classification – regularity, chance, and design. Today, I elaborate on those flaws.

First, let’s consider the small probability requirement. At one point, Dembski writes, “… a successful design inference sweeps the field clear of chance hypotheses. The design inference, in inferring design, eliminates chance entirely…” (p. 7) To understand the term “chance hypotheses,” we begin with an example. Suppose we are considering a situation in which the outcome is not predetermined but only two possible outcomes can occur – a 0 or a 1. The event we want to consider is that the situation yields a sequence of ten 1’s in a row – 1111111111. Did they occur by chance or did someone arrange for only 1’s to occur – that is, did the ten 1’s occur by design? Suppose for the moment that our criterion for “small probability” is less than one chance in a thousand; Dembski uses numbers much smaller than this, but for now this will illustrate our example well.

Consider two cases. The first is the flip of a fair coin. 1 represents heads, 0 represents tails. Each is equally likely, so each has a probability equal to 1/2. The probability of ten heads in a row is (1/2)^10 = 1/1024, which is less than one in a thousand, so under this chance hypothesis (each outcome has probability 1/2) the filter suggests we should attribute the sequence of ten 1’s to design.

But now consider a second case, dropping a thumbtack on the floor. Let’s use 0 to denote an instance in which the tack lands point up and 1 the instance where it lands point down. Suppose the particular thumbtacks we are dealing with land point down 2/3 of the time. Then the probability of ten 1’s in a row is (2/3)^10 ≈ 0.0173 (about 17 in 1000), considerably more likely than in the coin example. In this case, the filter suggests we should attribute the ten successive 1’s to chance.
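The arithmetic in the two cases is easy to check. Here is a minimal Python sketch; the 1-in-1000 threshold is the toy bound used in this example, not Dembski's actual probability bound:

```python
# Toy "small probability" bound from the example above (not Dembski's real bound).
THRESHOLD = 1 / 1000

# Case 1: a fair coin, P(1) = 1/2, so ten 1's in a row:
p_coin = (1 / 2) ** 10   # = 1/1024, just under one in a thousand

# Case 2: a thumbtack that lands point down 2/3 of the time:
p_tack = (2 / 3) ** 10   # ≈ 0.0173, about 17 in 1000

print(f"coin: {p_coin:.6f}  below threshold? {p_coin < THRESHOLD}")
print(f"tack: {p_tack:.6f}  below threshold? {p_tack < THRESHOLD}")
```

Under the fair-coin hypothesis the event clears the small-probability bar; under the thumbtack hypothesis it does not, which is exactly the point: the verdict depends on which chance hypothesis is assumed.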

The point is, different chance hypotheses give different results. Dembski writes, “…opposing chance to design requires that we be clear what chance processes could be operating to produce the event in question.” (p. 46) Dembski is very explicit about the necessity of the design inference eliminating all chance hypotheses. But this is a fatal flaw: except in very unusual cases, it is impossible to identify all possible chance hypotheses simply because finite human beings are unable to identify every chance scenario that might be operative.

Let’s consider a more complex example that illustrates this limitation. The Sierpinski triangle (Figure 2) provides a clear example of how specified, complex structures can arise by chance in unexpected ways. Construction of the figure starts with an equilateral triangle. The midpoints of each of the three sides are then connected, yielding a smaller equilateral triangle in the middle of the original triangle. This triangle is removed leaving three equilateral triangles, each having sides one half that of the original triangle. The process of identifying midpoints of sides, connecting them and removing the middle triangle is then repeated for each of the three new triangles. The process is repeated *ad infinitum* for all triangles produced.
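The construction just described can be summarized numerically: at each step, every remaining triangle splits into three half-size triangles, so the count of triangles triples while the remaining area shrinks by a factor of 3/4. A short Python sketch of that bookkeeping (the function name is my own, for illustration):

```python
from fractions import Fraction

def sierpinski_counts(steps):
    """Triangle count and remaining area after `steps` midpoint-removal steps,
    starting from a single triangle of area 1."""
    triangles, area = 1, Fraction(1)
    for _ in range(steps):
        triangles *= 3              # each triangle splits into 3 smaller ones
        area *= Fraction(3, 4)      # the middle quarter of each is removed
    return triangles, area

for n in range(5):
    t, a = sierpinski_counts(n)
    print(n, t, a)                  # area (3/4)^n tends to 0 as n grows
```

After n steps the remaining area is (3/4)^n, which tends to zero, anticipating the "no area" property discussed next.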

The result is called a *fractal*, and it’s a highly complex and specified figure. It has some surprising properties. Note that removal of the first triangle removed ¼ of the area. At the next step, ¼ of the remaining ¾ is removed. After an infinite series of steps, the total area removed (¼ + ¼(¾) + ¼(¾)² + …) is 1. That is, the Sierpinski triangle has no area! Another amazing property is that it can be produced by a random process. The process is called the *chaos game* and goes like this: Pick any point in the interior of the original equilateral triangle. Then randomly pick any of the three corners. Join the point to the corner selected, mark its midpoint, and use it as a new starting point. Again randomly pick any corner and repeat the same process. Keep on doing this. After continuing for a while, one can see that the marked points soon fall into the pattern of the Sierpinski triangle, albeit with a few stray marked points. The successive corners *have to be picked by chance* – a systematic process like going from corner to corner around the original triangle won’t work.

**Figure 2**. The Sierpinski triangle is highly ordered but can be generated through a random process.

Suppose someone did not know about the chaos game. Since the area of the Sierpinski triangle is zero, a point picked at random from the original triangle would have zero probability of lying in the Sierpinski triangle. Similarly, any random sequence of points in the original triangle would seem to have zero chance of following the pattern of the Sierpinski triangle. Thus it would appear to that person that the figure satisfies the sp/SP criterion – specification and small probability. But in fact, the triangle can be generated by chance. Given just the specification of the Sierpinski triangle, investigators could easily eliminate the chance explanation and conclude that it exhibits design simply because the chance process that generates it did not occur to them. That is, because it is impossible in practice to identify all chance hypotheses, one can never eliminate the possibility that very sophisticated structures like the Sierpinski triangle could arise by an undiscovered chance hypothesis.

So the explanatory filter fails because it is normally impossible to eliminate all chance hypotheses. But its logic is also flawed. It depends on a strict trichotomy – it assumes that every event is of one of three mutually exclusive types – regularity, chance, or design. However, Dembski is vague about his definitions of regularity and chance. He writes, “To attribute an event to regularity is to say that the event will (almost) always happen.” (p. 36) At one point, he identifies regularities with the outcomes of natural laws. (p. 53) As for chance, he adds, “Events of intermediate probability, or what I am calling IP events, are the events we normally expect to occur by chance in the ordinary circumstances of life. Rolling snake-eyes with a pair of fair dice constitutes an IP event.” (p. 40) But he never gives a clear definition. He then defines design as the logical complement of regularity and chance. Because regularity and chance are vague notions in *The Design Inference*, so is design.

For instance, consider some event that is the product of a previously unknown natural law. The explanatory filter will not identify it as a regularity and, if it can be shown to be specified and of low probability, it will be identified as the product of design. But at some future time, with more knowledge, it would be identified as regularity. So in such a case, the distinction between regularity and design depends not on the event but on the current state of the human understanding of nature.

Also, suppose an intelligent agent designed a natural process that incorporated chance. Human beings do this frequently – for example, football games begin with a coin flip, children’s games often incorporate chance elements such as dice or spinners, and statisticians use random sampling in conducting research. In these cases, chance provides for fairness – each team in a football game has an equal chance of choosing whether or not to get the ball first, adults or older children cannot use their superior knowledge to advantage over youngsters, and every member of a target population has an equal chance of being included in the sample. So it is not unreasonable to think that an intelligent agent could design processes that include chance for various purposes.

Dembski, however, has recently denied this latter possibility. In a recent book, *God, Chance and Purpose: Can God Have It Both Ways?*, David Bartholomew argues that God uses chance. In a review of Bartholomew’s book, Dembski rejects this assertion on the grounds that the clear, historical teaching of Christianity is that God knows the future, so God can predict the outcomes of all events even if human beings cannot. Because God can predict them, he argues, they exhibit what appears to us to be chance, but it is not chance from God’s perspective. He goes on to assert that “strict uncertainty about the future means that God cannot guarantee his promises because the autonomy of the world can then always overrule God.” He also writes that “…God has given creation a determinate character…” Giving creation a determinate character is a theological position that many thinkers have taken, but it’s controversial. Most importantly, however, even though Dembski’s review of Bartholomew’s book was written a decade after publication of *The Design Inference*, it reveals an important feature of Dembski’s thinking: the strict separation of chance and design is not based on science but on theological presuppositions.

The logic of the design inference is to examine three mutually exclusive bins, let’s call them A, B, and C. If an event does not fit in A, we test to see if it fits in B. If not, we test to see if it fits in C. If not, we put it in B by default. The process requires that A, B, and C be unambiguously distinguishable. But as we have seen, the explanatory filter cannot distinguish between A (regularity) and C (design) in the case of currently unknown natural laws. Furthermore, the assertion that B (chance) and C (design) are clearly distinguishable depends on theological assumptions, not on science. So the filter is not a reliable way for scientists to identify the presence of design.

In summary, then, Dembski has not achieved his ambitious goal of providing a scientific means of detecting intelligent design. The outputs of the explanatory filter can depend on human knowledge rather than on the phenomena being studied in two ways – they can mistakenly infer design when the phenomena are actually the product of currently unknown natural laws or when they are the product of unanticipated chance hypotheses. Also, the distinction between design and chance depends on theological assumptions. On both grounds, then, the design inference cannot be added to the methodology scientists can use to account for natural phenomena.

But there is another approach to understanding how design can be inferred. In a book of essays titled *Creative Tension: Essays on Science and Religion*, Michael Heller argues that any discussion of science and religion necessarily involves an intrinsic God-of-the-gaps argument, that is, an assertion of what science cannot do but religion can do. The key to progress in such discussions is to distinguish the essential and inessential gaps – those issues that are truly outside the scope of science but on which religion can make a legitimate contribution. Heller argues that there are only three essential gaps and they are expressed in three questions: Why is there anything? Given that things exist, why do they have such an orderly structure? How do we account for ethics and values? From this perspective, Intelligent Design is seeking to close an essential gap by trying to provide a scientific answer to the second question, something that cannot be done. From Heller’s point of view, the design inference infers too little design – all natural laws, all chance processes, and all instances of specified complexity are the results of design. But that inference is based on faith – it is not, nor can it be, a scientific inference.

**James Bradley is a Professor of Mathematics emeritus at Calvin College in Grand Rapids, Michigan, USA. He received his bachelor of science in mathematics from MIT and his doctorate in mathematics from the University of Rochester. His mathematical specialty has been game theory and operations research. In recent years, he has pursued an interest in mathematics and theology. He coedited *Mathematics in a Postmodern Age: A Christian Perspective* and the mathematics volume in Harper One’s *Through the Eyes of Faith* series. He also edits the *Journal of the Association of Christians in the Mathematical Sciences*.**

December 29th, 2010

This is one of the most fair explanations of Dembski I’ve seen. Very gracious.

I agree with the author: both chance and natural regularity can be the product of a designer.

-Wm

December 29th, 2010

We know that time is variable: its length depends on your position in space and how fast you are going. To fully understand God’s relationship to time, we would have to know where He is and how fast He is going—neither of which we could possibly ever know. It seems to me that the only correct answer to anything related to the foreknowledge of God is “we don’t know”. So in the above sentences “God knows the future so God can predict the outcomes of all events ... but it is not chance from God’s perspective,” there appears to be an implication that the author knows that God is “in time” much like we are and that it is somewhat deterministic. I am much more comfortable standing in awe of a God who might be able to see all possible outcomes based on the free-will choices of our hearts and the random chaotic events which connect them.

December 29th, 2010

Dear Dr. Bradley:

No one denies that elaborate geometrical figures can be generated by chance (or chance plus natural laws). The geometrical patterns in crystals, stalactites, etc. are obviously not in any direct way intelligently designed, but arise due to contingent events operating in a context of laws of chemistry and physics.

However, all such patterned results are static and homogeneous. The integrated complex systems of life, by contrast, are dynamic and consist of heterogeneous *moving parts* which interact with each other. They have complex *feedback systems* to adjust their operation, repair themselves, etc. There is nothing like this in the triangles above, or in crystalline structures in minerals, etc. Thus, all arguments from merely geometrical regularities in nature are inappropriate. What needs to be explained is why the various systems within cells and organisms are so integrated and self-adjusting, with levels of complexity far beyond those not only of any machine but of any factory that human beings have ever produced. What needs to be explained is how all of this could arise without design. Note that I did not say “miracles” or “interventions”; I said “design.”

December 30th, 2010

I think, Rich, that you are not understanding the purpose of the Sierpinski Triangle example. It is not an example of how life is or a proof that it is not designed - it is an example that shows that Dembski’s model is fundamentally flawed. Your questions in your second paragraph are valid but they are tangential to the point Dr Bradley is making.

December 30th, 2010

I fully agree with you, Rich. I affirm design. My point about The Design Inference is that we don’t have to presume special divine intervention whenever we see specified complexity. God uses secondary agents. Thus an explanation in terms of natural laws and an explanation in terms of divine activity can be completely compatible if we see God as the origin of those natural laws.

December 30th, 2010

It’s also worth noting that natural selection is an intelligence by Dembski’s own definition, and therefore can be responsible for “specified complexity”:

“...by intelligence I mean the power and facility to choose between options–this coincides with the Latin etymology of “intelligence,” namely, “to choose between””

As one might predict, a Christian neuroscientist who very politely pointed that out on Dembski’s blog was quickly banned. http://www.uncommondescent.com/intelligent-design/id-in-the-uk/

and http://www.uncommondescent.com/intelligent-design/id-in-the-uk/#comment-84470 Also see http://pandasthumb.org/archives/2007/01/dissent-out-of.html

December 31st, 2010

@Jim

//My point about The Design Inference is that we don’t have to presume special divine intervention whenever we see specified complexity//

Jim, I don’t see how your example qualifies as an example of specified complexity; it’s a highly ordered structure.

And laws are descriptions of phenomena, how do we get from descriptions of regularities in nature to a cause of phenomena?

The philosopher of science Michael Polanyi, a skeptic of reductionism, writes:

“A machine, for example, cannot be explained in terms of physics and chemistry. Machines can go wrong and break down—something that does not happen to laws of physics and chemistry.

In fact, a machine can be smashed and the laws of physics and chemistry will go on operating unfailingly in the parts remaining after the machine ceases to exist. Engineering principles create the structure of the machine which harnesses the laws of physics and chemistry for the purposes the machine is designed to serve. Physics and chemistry cannot reveal the practical principles of design or co-ordination which are the structure of the machine.”

December 31st, 2010

What a surprise.

January 1st, 2011

Jim Bradley:

“I fully agree with you, Rich. I affirm design. My point about The Design Inference is that we don’t have to presume special divine intervention whenever we see specified complexity. God uses secondary agents.”

Could you specify where Dembski speaks about “special divine intervention” in his Design Inference?

As far as I know, his claim is that specified complexity is a marker of design by an intelligent agent. Dembski DOESN’T claim *if specified complexity -> there must have been special divine intervention*.

(I see problems in calculating probabilities: they are too dependent on background information, and not objective things. And that’s why I think that Dembski’s DI has not much use.)

January 1st, 2011

Bill Dembski abandoned his “explanatory” filter a while ago. Though he later backtracked, the filter does not seem to get much of an airing these days.

OT, Happy New Year to everyone!

January 1st, 2011

Here is where Dembski “reinstates” his filter, stating his reason as:

“I was thinking of just sticking with SC [specified complexity] in the future, but with critics crowing about the demise of the EF, I’ll make sure it stays in circulation.”

I am unaware of any subsequent demonstration by Dembski or anyone else of how the EF might be usefully employed. The ARN discussion board is practically moribund now but is still number one under “Intelligent Design Links” at UD (Dembski’s blog), though new registrations are blocked. I posted a question there back in 2005:

“Can anyone point me to an example of the successful application of the EF to a biological system?”

and nothing remotely convincing has emerged so far.

January 2nd, 2011

Joe Felsenstein:

“Dembski argues that there are theorems that prevent natural selection from explaining the adaptations that we see. His arguments do not work. There can be no theorem saying that adaptive information is conserved and cannot be increased by natural selection. Gene frequency changes caused by natural selection can be shown to generate specified information. The No Free Lunch theorem is mathematically correct, but it is inapplicable to real biology. Specified information, including complex specified information, can be generated by natural selection without needing to be ‘smuggled in’. When we see adaptation, we are not looking at positive evidence of billions and trillions of interventions by a designer. Dembski has not refuted natural selection as an explanation for adaptation.” (link)
