Why Dembski’s Design Inference Doesn’t Work. Part 2

By James Bradley (guest author)

This is the second in a two-part series taken from a scholarly essay by James Bradley, which can be found here. In his last post, Dr. Bradley explained how Dembski’s “explanatory filter” for detecting design works. He argues that the model suffers from two fatal flaws, one involving the small probability requirement and the other involving the three-way classification – regularity, chance, and design.

We at BioLogos think this essay is of great significance in potentially dispelling one of the important prongs of the Intelligent Design movement as led by fellows of the Discovery Institute. Indeed, we think it is so important that we asked Bill Dembski if he would be willing to respond on our site. We've not yet heard back from him, but we are anxious to learn whether he is willing to concede the points being made here by Dr. Bradley. The hallmark of good science is being able to concede when a particular model doesn't work anymore and then move on. We think Bill's Design Inference is a model in crisis. Will he be able to save it, or will he concede that it is now time to take this particular mathematical approach out of the ID movement? We are anxious to know his thoughts.

In my previous post, I summarized Dembski’s explanatory filter (see figure) and suggested that it suffers from two fatal flaws: (1) the small probability requirement and (2) the three-way classification – regularity, chance, and design. Today, I elaborate on those flaws.

First, let’s consider the small probability requirement. At one point, Dembski writes, “… a successful design inference sweeps the field clear of chance hypotheses. The design inference, in inferring design, eliminates chance entirely…” (p. 7) To understand the term “chance hypotheses,” we begin with an example. Suppose we are considering a situation in which the outcome is not predetermined but only two possible outcomes can occur – a 0 or a 1. The event we want to consider is that the situation yields a sequence of ten 1’s in a row – 1111111111. Did they occur by chance, or did someone arrange for only 1’s to occur – that is, did the ten 1’s occur by design? Suppose for the moment that our criterion for “small probability” is less than one chance in a thousand; Dembski uses numbers much smaller than this, but it will serve to illustrate the point.

Consider two cases. The first is the flip of a fair coin. 1 represents heads, 0 represents tails. Each is equally likely, so each has a probability equal to 1/2. The probability of ten heads in a row is (1/2)^10 = 1/1024, which is less than one in a thousand, so under this chance hypothesis (each outcome has probability 1/2) the filter suggests we should attribute the sequence of ten 1’s to design.

But now consider a second case, dropping a thumbtack on the floor. Let’s use 0 to denote an instance in which the tack lands point up and 1 the instance where it lands point down. Suppose the particular thumbtacks we are dealing with land point down 2/3 of the time. Then the probability of ten 1’s in a row is (2/3)^10 ≈ .0173 (or about 17 in 1000), considerably more likely than in the coin example. Since this is well above our one-in-a-thousand cutoff, the filter suggests we should attribute the ten successive 1’s to chance.
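To make the comparison concrete, here is a minimal Python sketch of the two calculations; the one-in-a-thousand cutoff and the variable names are illustrative choices taken from the example above, not part of Dembski’s own formalism:

```python
# Probability of ten 1's in a row under two different chance hypotheses.
THRESHOLD = 1 / 1000  # the illustrative "small probability" cutoff used above

p_fair_coin = (1 / 2) ** 10  # fair coin: 1/1024, just under one in a thousand
p_thumbtack = (2 / 3) ** 10  # tack lands point down 2/3 of the time: about 0.0173

for name, p in [("fair coin", p_fair_coin), ("thumbtack", p_thumbtack)]:
    verdict = "design (small probability)" if p < THRESHOLD else "chance"
    print(f"{name}: p = {p:.5f} -> filter suggests {verdict}")
```

The same observed event, ten 1’s in a row, falls on opposite sides of the threshold depending on which chance hypothesis is assumed.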

The point is, different chance hypotheses give different results. Dembski writes, “…opposing chance to design requires that we be clear what chance processes could be operating to produce the event in question.” (p. 46) Dembski is very explicit about the necessity of the design inference eliminating all chance hypotheses. But this is a fatal flaw: except in very unusual cases, it is impossible to identify all possible chance hypotheses simply because finite human beings are unable to identify every chance scenario that might be operative.

Let’s consider a more complex example that illustrates this limitation. The Sierpinski triangle (Figure 2) provides a clear example of how specified, complex structures can arise by chance in unexpected ways. Construction of the figure starts with an equilateral triangle. The midpoints of each of the three sides are then connected, yielding a smaller equilateral triangle in the middle of the original triangle. This triangle is removed, leaving three equilateral triangles, each with sides half the length of the original triangle’s. The process of identifying midpoints of sides, connecting them, and removing the middle triangle is then repeated for each of the three new triangles. The process is repeated ad infinitum for all triangles produced.

The result is called a fractal, and it’s a highly complex and specified figure. It has some surprising properties. Note that removing the first triangle removes ¼ of the area. At the next step, ¼ of the remaining ¾ is removed. After an infinite series of steps, the total area removed, ¼ + ¼(¾) + ¼(¾)² + …, is a geometric series that sums to (¼)/(1 − ¾) = 1. That is, the Sierpinski triangle has no area! Another amazing property is that it can be produced by a random process. The process is called the chaos game and goes like this: Pick any point in the interior of the original equilateral triangle. Then randomly pick any of the three corners. Join the point to the corner selected, mark its midpoint, and use it as a new starting point. Again randomly pick any corner and repeat the same process. Keep on doing this. After continuing for a while, one can see that the marked points soon fall into the pattern of the Sierpinski triangle, albeit with a few stray marked points. The successive corners have to be picked by chance – a systematic process like going from corner to corner around the original triangle won’t work.

Figure 2. The Sierpinski triangle is highly ordered but can be generated through a random process
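For readers who want to see the chaos game in action, here is a minimal Python sketch of the process just described; the particular corner coordinates, starting point, and iteration count are illustrative choices, not part of the original construction:

```python
import random

def chaos_game(n_points=50_000, burn_in=20):
    """Generate points that fall onto the Sierpinski triangle via the chaos game."""
    # Corners of an equilateral triangle with unit sides.
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
    x, y = 0.3, 0.2  # any starting point inside the triangle will do
    points = []
    for i in range(n_points):
        cx, cy = random.choice(corners)    # randomly pick one of the three corners
        x, y = (x + cx) / 2, (y + cy) / 2  # move to the midpoint toward that corner
        if i >= burn_in:                   # discard the first few "stray" points
            points.append((x, y))
    return points

pts = chaos_game()
print(len(pts), "points generated")  # plotting them (e.g. with matplotlib) reveals the fractal
```

Plotting the resulting points reproduces the pattern in Figure 2; choosing the corners systematically instead of randomly does not.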

Suppose someone did not know about the chaos game. Since the area of the Sierpinski triangle is zero, a point picked at random from the original triangle would have zero probability of lying in the Sierpinski triangle. Similarly, any random sequence of points in the original triangle would seem to have zero chance of following the pattern of the Sierpinski triangle. Thus it would appear to that person that it satisfies the sp/SP criterion – specification and small probability. But in fact, the triangle can be generated by chance. Given just the specification of the Sierpinski triangle, investigators could mistakenly eliminate the chance explanation and conclude that it exhibits design, simply because the chance process that generates it did not occur to them. That is, because it is impossible in practice to identify all chance hypotheses, one can never eliminate the possibility that very sophisticated structures like the Sierpinski triangle could arise by an undiscovered chance hypothesis.

So the explanatory filter fails because it is normally impossible to eliminate all chance hypotheses. But its logic is also flawed. It depends on a strict trichotomy – it assumes that every event is of exactly one of three mutually exclusive types – regularity, chance, or design. However, Dembski is vague about his definitions of regularity and chance. He writes, “To attribute an event to regularity is to say that the event will (almost) always happen.” (p. 36) At one point, he identifies regularities with the outcomes of natural laws. (p. 53) As for chance, he adds, “Events of intermediate probability, or what I am calling IP events, are the events we normally expect to occur by chance in the ordinary circumstances of life. Rolling snake-eyes with a pair of fair dice constitutes an IP event.” (p. 40) But he never gives a clear definition. He then defines design as the logical complement of regularity and chance. Because regularity and chance are left vague in The Design Inference, design is vague as well.

For instance, consider some event that is the product of a previously unknown natural law. The explanatory filter will not identify it as a regularity and, if it can be shown to be specified and of low probability, it will be identified as the product of design. But at some future time, with more knowledge, it would be identified as a regularity. So in such a case, the distinction between regularity and design depends not on the event but on the current state of human understanding of nature.

Also, suppose an intelligent agent designed a natural process that incorporated chance. Human beings do this frequently – for example, football games begin with a coin flip, children’s games often incorporate chance elements such as dice or spinners, and statisticians use random sampling in conducting research. In these cases, chance provides for fairness – each team in the football game has an equal chance of choosing whether or not to get the ball first, adults or older children cannot use their superior knowledge to advantage over youngsters, and every member of a target population has an equal chance of being included in the sample. So it is not unreasonable to think that an intelligent agent could design processes that include chance for various purposes.

Dembski, however, has recently denied this latter possibility. In a recent book, God, Chance and Purpose: Can God Have It Both Ways?, David Bartholomew argues that God uses chance. In a review of Bartholomew’s book, Dembski rejects this assertion on the grounds that the clear, historical teaching of Christianity is that God knows the future, so God can predict the outcomes of all events even if human beings cannot. Because God can predict them, he argues, such events exhibit what appears to us to be chance, but it is not chance from God’s perspective. He goes on to assert that “strict uncertainty about the future means that God cannot guarantee his promises because the autonomy of the world can then always overrule God.” He also writes that “…God has given creation a determinate character…” Giving creation a determinate character is a theological position that many thinkers have taken, but it’s controversial. Most importantly, however, even though Dembski’s review of Bartholomew’s book was written a decade after publication of The Design Inference, it reveals an important feature of Dembski’s thinking – the strict separation of chance and design is not based on science but on theological presuppositions.

The logic of the design inference is to sort events into three mutually exclusive bins; let’s call them A, B, and C. If an event does not fit in A, we test to see if it fits in B. If it fits neither, we place it in C by default. The process requires that A, B, and C be unambiguously distinguishable. But as we have seen, the explanatory filter cannot distinguish between A (regularity) and C (design) in the case of currently unknown natural laws. Furthermore, the assertion that B (chance) and C (design) are clearly distinguishable depends on theological assumptions, not on science. So the filter is not a reliable way for scientists to identify the presence of design.
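The elimination logic can be written down in a few lines. The sketch below is my own schematic rendering of it, not Dembski’s notation: laws are represented as yes/no tests, chance hypotheses as probability functions, and the specification check is omitted for brevity. It makes the dependence on human knowledge explicit – the “design” verdict is simply whatever is left over after the laws and chance hypotheses we happened to enumerate have been ruled out.

```python
def explanatory_filter(event, law_tests, chance_probabilities, threshold=1e-3):
    # Bin A (regularity): some known natural law accounts for the event.
    if any(law(event) for law in law_tests):
        return "regularity"
    # Bin B (chance): the event is not improbable under some known chance hypothesis.
    if any(prob(event) >= threshold for prob in chance_probabilities):
        return "chance"
    # Bin C (design): the default once A and B have been eliminated.
    return "design"

# The ten-1's event from earlier, with only the fair-coin hypothesis in view:
ten_ones = "1111111111"
print(explanatory_filter(ten_ones, law_tests=[],
                         chance_probabilities=[lambda s: (1 / 2) ** len(s)]))  # design
# Adding the thumbtack hypothesis flips the verdict:
print(explanatory_filter(ten_ones, law_tests=[],
                         chance_probabilities=[lambda s: (2 / 3) ** len(s)]))  # chance
```

The verdict changes not because the event changes, but because the list of hypotheses supplied to the filter changes.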

In summary, then, Dembski has not achieved his ambitious goal of providing a scientific means of detecting intelligent design. The output of the explanatory filter can depend on human knowledge rather than on the phenomena being studied in two ways: it can mistakenly infer design when the phenomena are actually the product of currently unknown natural laws, or when they are the product of unanticipated chance hypotheses. Also, the distinction between design and chance depends on theological assumptions. On both grounds, then, the design inference cannot be added to the methodology scientists can use to account for natural phenomena.

But there is another approach to understanding how design can be inferred. In a book of essays titled Creative Tension: Essays on Science and Religion, Michael Heller argues that any discussion of science and religion necessarily involves an intrinsic God-of-the-gaps argument, that is, an assertion of what science cannot do but religion can do. The key to progress in such discussions is to distinguish the essential and inessential gaps – those issues that are truly outside the scope of science but on which religion can make a legitimate contribution. Heller argues that there are only three essential gaps and they are expressed in three questions: Why is there anything? Given that things exist, why do they have such an orderly structure? How do we account for ethics and values? From this perspective, Intelligent Design is seeking to close an essential gap by trying to provide a scientific answer to the second question, something that cannot be done. From Heller’s point of view, the design inference infers too little design – all natural laws, all chance processes, and all instances of specified complexity are the results of design. But that inference is based on faith – it is not, nor can it be, a scientific inference.



About the Author

James Bradley

James Bradley is a Professor of Mathematics emeritus at Calvin College in Grand Rapids, Michigan, USA. He received his bachelor of science in mathematics from MIT and his doctorate in mathematics from the University of Rochester. His mathematical specialty has been game theory and operations research. In recent years, he has pursued an interest in mathematics and theology. He coedited Mathematics in a Postmodern Age: a Christian Perspective and the mathematics volume in Harper One’s Through the Eyes of Faith series. He also edits the Journal of the Association of Christians in the Mathematical Sciences.
