Why Dembski’s Design Inference Doesn’t Work. Part 1


December 27, 2010 Tags: Design

Today's entry was written by James Bradley. Please note the views expressed here are those of the author, not necessarily of The BioLogos Foundation. You can read more about what we believe here.


This is the first in a two-part series taken from a scholarly essay by James Bradley, which can be found here.

The theory of Intelligent Design has gained a great deal of credibility in the evangelical world since it first became widely known in the early 1990s; much of that credibility has been based on the belief that a solid theoretical foundation has been laid for Intelligent Design in William Dembski’s 1998 book, The Design Inference. This article challenges that belief by questioning some of Dembski’s assumptions, pointing out some limitations of his analysis, and arguing that a design inference is necessarily a faith-based rather than a scientific inference.

The Design Inference begins with two broadly accessible chapters that introduce the main ideas behind Dembski’s method for inferring design. These are followed by four technical chapters that provide a rigorous mathematical foundation for the method. The book is not a contribution to either mathematics or science per se; rather it is an attempt to extend scientific methodology in a new direction.

The basic task of science is to account for natural phenomena. “Account for” typically means to provide causative explanations for phenomena and to enable prediction of other phenomena. Typically, causative explanations are mechanisms – for example, today we generally accept the sun’s gravity as a causative explanation for the phenomenon of planets in our solar system staying in their orbits. Dembski’s ambitious goal is to increase the repertoire of available causative explanations by providing a rigorous method acceptable to scientists that would allow intelligent design to be recognized as a legitimate cause.

The method is a variation on the standard method of statistical inference, which can be explained by way of an example. The Salk and Sabin polio vaccines, developed in the 1950s, had to be tested on a large number of people to determine their efficacy. Some 400,000 children in grades one through three took part. Participating communities were divided into test and control populations, and the testing was “double-blind.” That is, some communities received the vaccine while others received a placebo, and which group a community belonged to was hidden from both the doctors and the families of the children who participated. This ensured that doctors were not influenced in their diagnoses by knowledge about the vaccine.

The method proceeded by tentatively assuming that the vaccine was ineffective; that is, children in each community were assumed to be equally likely to contract polio after receiving their injection. Of course, this does not mean that exactly the same number of children would be infected in each community, but if the number of children who contracted polio in the vaccine group fell significantly below the number in the control group, the researchers would conclude that the vaccine was effective.

“Significantly” means that the difference was large enough to be very unlikely to occur by chance. When the testing period was complete, the rate of polio incidence was 28 per 100,000 children in the vaccinated communities and 71 per 100,000 children in the control communities. Statisticians were able to calculate that the probability of this great a difference happening by chance was less than one in a million. Thus they concluded that the vaccine had been effective. Due to the Salk and later Sabin vaccines, polio has today been eliminated in most of the world.
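To get a feel for how such a probability is calculated, here is a minimal sketch of a standard two-proportion test applied to figures like those above. The group size of 200,000 children per arm is an illustrative assumption; the exact split of the 400,000 participants is not given here.

```python
from math import sqrt, erfc

# Rates reported above; the group sizes are illustrative assumptions,
# not figures taken from the trial description.
n_vaccine, n_control = 200_000, 200_000
rate_vaccine = 28 / 100_000   # polio rate in vaccinated communities
rate_control = 71 / 100_000   # polio rate in control communities

# Null hypothesis: the vaccine does nothing, so both groups share one
# underlying rate; estimate that rate by pooling the two groups.
pooled = (rate_vaccine * n_vaccine + rate_control * n_control) / (n_vaccine + n_control)
std_err = sqrt(pooled * (1 - pooled) * (1 / n_vaccine + 1 / n_control))

# How many standard errors separate the two observed rates?
z = (rate_control - rate_vaccine) / std_err

# Probability of a gap at least this large arising by chance alone.
p_value = 0.5 * erfc(z / sqrt(2))
print(f"z = {z:.1f}, chance probability = {p_value:.1e}")  # far below one in a million
```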

Note how the procedure goes: The researchers first designed an experiment (in this case, testing 400,000 children divided into vaccine and control communities). They then identified a pattern that would demonstrate that a factor other than chance was operative (in this case, the pattern was that the number of children contracting polio in vaccine communities would be significantly lower than in the control communities). Then they did the experiment. Because the difference was extremely unlikely to have occurred by chance, they inferred that the vaccine was effective.

The polio researchers first designed the experiment, and then looked for the pattern after doing the experiment. The researchers thus could not have rigged the results because the pattern they sought was clearly described before doing the experiment. Dembski’s design inference, however, looks backward in time – it takes already existing patterns and seeks an explanation other than chance, namely design.

The way Dembski addresses this problem is by requiring that patterns be specified, that is, describable in a way that is independent of the process of observation. For example, suppose someone shows you a target attached to a tree with an arrow in the bull’s-eye. If the arrow were shot first and the target placed around it, the archer’s claim of being a good shot would be invalid. But if the target were posted first and he then hit the bull’s-eye, he could claim excellent marksmanship. In the second case, the specification (the target) was independent of the shot; in the first case it was not.

So to identify the presence of design, Dembski replaces the prior description of a pattern that statistical inference uses with a specified description. He still uses the presence of small probabilities in the same way. (He includes quite a lengthy discussion of “how small is small.” While interesting, it doesn’t bear on our discussion here.) He summarizes his design inference with the following “explanatory filter”:

Figure 1. Dembski’s explanatory filter. HP means “high probability.” This diamond selects phenomena that can be accounted for by “laws of nature.” IP means “intermediate probability.” These are events like flipping five heads in a row with a coin: they do not involve laws of nature, they do involve chance, but the likelihood is not as small as Dembski wants to require in order to identify design. SP means “small probability” and sp means “specified” as discussed above.

So let’s suppose we have an event and want to test it for design. We first see whether it is the result of a law of nature. If not, we test it to see whether it involves an intermediate probability; if so, we attribute the event to chance. Lastly, if it is of very small probability and is specified, the explanatory filter attributes it to design. If it reaches the sp/SP diamond but fails either the small-probability or the specification test, the filter follows the same convention as statistics and attributes the event to chance, due to lack of sufficient evidence to say otherwise.
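Read simply as a decision procedure, the filter can be sketched in a few lines of code. This is a simplified reading rather than Dembski’s own formalization, and the probability cutoffs below are illustrative placeholders (Dembski argues at length for a particular small-probability bound).

```python
def explanatory_filter(probability: float, is_specified: bool) -> str:
    """Simplified sketch of the explanatory filter's three diamonds.

    The cutoffs are illustrative placeholders, not Dembski's own values.
    """
    HP_CUTOFF = 0.5      # "high probability": attribute to a law of nature
    SP_CUTOFF = 1e-150   # "small probability": candidate for design

    if probability >= HP_CUTOFF:
        return "regularity (law of nature)"
    if probability > SP_CUTOFF:
        return "chance (intermediate probability)"
    # Small probability: design only if the event is also specified;
    # otherwise default to chance for lack of sufficient evidence.
    return "design" if is_specified else "chance (unspecified)"

# Five heads in a row (probability 1/32) lands in the intermediate band:
print(explanatory_filter(1 / 32, is_specified=True))  # -> chance (intermediate probability)
```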

The explanatory filter seems straightforward, but it includes two fatal flaws: one involving the small-probability requirement and the other involving the three-way classification – regularity, chance, and design. We’ll examine these flaws next time.


James Bradley is Professor of Mathematics Emeritus at Calvin College in Grand Rapids, Michigan, USA. He received his bachelor of science in mathematics from MIT and his doctorate in mathematics from the University of Rochester. His mathematical specialty has been game theory and operations research. In recent years, he has pursued an interest in mathematics and theology. He coedited Mathematics in a Postmodern Age: A Christian Perspective and the mathematics volume in HarperOne’s Through the Eyes of Faith series. He also edits the Journal of the Association of Christians in the Mathematical Sciences.




This article is now closed for new comments. The archived comments are shown below.

sy - #45106

December 27th 2010

I cheated. I went to the link and read the whole essay. Thank you for a brilliant and extremely useful discussion of what might be the actual heart of the issue related to chance, scientific proof and faith. You have given us a great deal to think about here.


Glen Davidson - #45112

December 27th 2010

It’s really rather obvious that Dembski’s whole point with the “design inference” was never to detect design at all (archaeologists and forensic specialists have never had much difficulty turning up positive evidence for design); rather, it was to obfuscate the line that we recognize between designed objects and life. Indeed, Dembski does not want anything to change in archaeology or in forensics – he does not expect us to start inferring that some sort of super-human civilization (Atlantis?) or alien race made the traces of life found throughout the strata. Rather, those fields are expected to determine matters as usual, with life not among the things they identify as designed (not by identified agents, at least).

Life differs from designed objects in a great many ways, including its reproduction and its evolutionary history, and in not exhibiting the rationality and rampant inclusion of novelty found in actual designs. Design never explained anything in life, including before Darwin wrote his book. Dembski’s “design inference” is designed to ignore all of these facts, and thus to confuse categorically different things: physis and techne.

Glen Davidson


DBB - #45211

December 28th 2010

Dr. Bradley doesn’t agree with Dembski’s basis for determining design, so I want to know what Dr. Bradley’s basis is for determining which objects are designed. If he doesn’t agree with Dembski’s mathematics, does he have his own mathematical model, or does he use a non-mathematical model? If we are going to have any reasonable discussion of design versus non-design, each side needs to put forth its model.


Stuart - #45213

December 28th 2010

I read the linked essay and wish to remark on it here; in particular, I wanted to discuss your Sierpinski triangle example.

This seems at first like a very nice counterexample, but hold on. The Sierpinski triangle is a completely mathematical object. Dembski’s explanatory filter obviously was never intended to discuss whether a piece of abstract math was designed or not (what would that even mean?)—rather, things in the real world. Let’s try to imagine various ways we might encounter the ST in the real world.

We might find it on a piece of paper, in incredible detail. Here we would surely conclude, notwithstanding this algorithm, that it was designed, for there is seemingly no way to implement the algorithm randomly on a piece of paper. We might get a radio sequence giving pairs of coordinates of points of the triangle, in the order dictated (say) by the chaos game. Again, we would (obviously I think) conclude design. Of course, we might also find a tree or living system organized like a Sierpinski triangle. Here we might very well believe that the chaos game was responsible: many living things in nature ARE fractals, and I’m unaware that Dembski et al. have ever suggested that these were therefore clearly designed.
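For concreteness, the chaos game I have in mind can be sketched in a few lines; the vertex coordinates and iteration count are arbitrary choices.

```python
import random

# Chaos game: start anywhere, then repeatedly jump halfway toward a
# randomly chosen vertex of a triangle.  The visited points trace out
# the Sierpinski triangle.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.3, 0.4            # arbitrary starting point
points = []
for _ in range(100_000):
    vx, vy = random.choice(vertices)
    x, y = (x + vx) / 2, (y + vy) / 2
    points.append((x, y))
# Plotting `points` (e.g. with matplotlib) reveals the Sierpinski triangle.
```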


Stuart - #45214

December 28th 2010

I think, therefore, that the Sierpinski triangle example fails. Knowledge of such things as the chaos game will certainly increase the scope of things we consider as possible chance explanations, but we have to be sensitive as well to the nature of the actual phenomenon whose designedness is in question in the (physical) world. I think my examples show that the ST does not really pose a threat to the filter.

A last important point, related to the broader point you were making. It’s true we can never enumerate every possible chance process. Maybe I’m wrong, and some phenomenon (not the chaos game alone, for sure) could even explain a picture of the ST on paper by chance, and we just haven’t thought of it yet. I don’t view this “something-might-come-up” approach as a useful tool in deciding whether to make a particular design inference, however. Science can always be wrong because of something we haven’t yet considered, sure; but we should always try to come to the best inferences based on current data. It may turn out one day that Egyptian hieroglyphics or Indian arrowheads were actually generated by a process we “just haven’t thought of”; meanwhile, it’s not useful to suspend judgment about whether they were designed.


Tim Sverduk - #45230

December 29th 2010

It looks to me like Dr Bradley agrees with Dr Dembski’s explanatory filter, but with one caveat: at the point of SP/sp -> yes -> (Design), Dr Bradley would change the filter to SP/sp -> yes -> (Design OR Unknown-Natural-Law). In real life, however, humans routinely (and correctly) dismiss the possibility of Unknown-Natural-Law if the SP/sp is too high.


Joe Felsenstein - #45270

December 29th 2010

If the design has to be detected just from the form, Bradley is correct. But if we have an external criterion (“specification”), it is easy to detect. For example, if the criterion is fitness, we can see that a pure random process such as mutation could not have produced an organism with fitness that high. Fish gotta swim and birds gotta fly, and they do it incredibly better than a random pile of molecules from the primordial organic soup would.

That much detection is in fact easy, once we make fitness our criterion.  The question is then: exactly what have we detected?  Design?  Or could it be natural selection?

Dembski has another argument, his Law of Conservation of Complex Specified Information (LCCSI) which he believes rules out any ability of natural selection to improve fitness by nearly as much as we see.  He is wrong—the LCCSI does not do that job.  For details see my 2007 article in Reports of the National Center for Science Education, which I believe is highly relevant to these issues.  It will be found online here.  So far Dembski has not even attempted to rebut it.


Ben Griffith - #45278

December 29th 2010

Great Review!

I just finished reading Draper’s chapter in the Oxford Handbook of P of R having to do with science. I really enjoyed the contribution you’ve made to a critique of Dembski.


hgp - #45279

December 29th 2010

In addition to Stuart’s comments I want to point out another flaw in the Sierpinski Triangle example:

Mr. Bradley seems to think that the “chaos game” he describes in his essay is not a specified process but chance, despite the fact that he gives an (algorithmic) specification of it in his text. The specification is not undone by the fact that it incorporates a chance element; the specification is there for everyone to see.

The triangle is not generated by chance when the process is the “chaos game”; it is generated by a specified(!) algorithm. So the example fails on this level also. Dembski’s filter works just fine:
- no natural laws produce the ST
- the probability of the pattern is very low and it follows a specification
- so it is designed by a (man-made) algorithm

Dembski’s filter is not able to show us which algorithm exactly produced any given ST, but that is not its aim.

(sorry for my foreigner’s basic spelling/grammar mistakes)

