Why Dembski’s Design Inference Doesn’t Work. Part 1

By James Bradley (guest author)

This is the first in a two-part series taken from a scholarly essay by James Bradley, which can be found here.


The theory of Intelligent Design has gained a great deal of credibility in the evangelical world since it first became widely known in the early 1990s; much of that credibility has been based on the belief that a solid theoretical foundation has been laid for Intelligent Design in William Dembski’s 1998 book, The Design Inference. This article challenges that belief by questioning some of Dembski’s assumptions, pointing out some limitations of his analysis, and arguing that a design inference is necessarily a faith-based rather than a scientific inference.

The Design Inference begins with two broadly accessible chapters that introduce the main ideas behind Dembski’s method for inferring design. These are followed by four technical chapters that provide a rigorous mathematical foundation for the method. The book is not a contribution to either mathematics or science per se; rather it is an attempt to extend scientific methodology in a new direction.

The basic task of science is to account for natural phenomena. “Account for” typically means to provide causative explanations for phenomena and to enable prediction of other phenomena. Typically, causative explanations are mechanisms – for example, today we generally accept the sun’s gravity as a causative explanation for the phenomenon of planets in our solar system staying in their orbits. Dembski’s ambitious goal is to increase the repertoire of available causative explanations by providing a rigorous method acceptable to scientists that would allow intelligent design to be recognized as a legitimate cause.

The method is a variation on the standard method of statistical inference, which can be explained by way of an example. The Salk and Sabin polio vaccines, developed in the 1950s, had to be tested on a large number of people to determine their efficacy. 400,000 children in grades one through three took part. Participating communities were divided into test and control populations, and the testing was “double blind.” That is, some communities received the vaccine while others received a placebo, and which group a community belonged to was hidden from both the doctors and the families of the children who participated. This ensured that doctors were not influenced in their diagnoses by knowledge about the vaccine.

The method proceeded by tentatively assuming that the vaccine was ineffective; that is, children in each community were assumed to be equally likely to contract polio after receiving their injection. Of course, this does not mean that exactly the same number of children would be infected in each community, but if the number of children who contracted polio in the vaccine group was significantly below the number in the control group, the researchers would conclude that the vaccine was effective.

“Significantly” means that the difference was large enough to be very unlikely by chance. When the testing period was complete, the rate of polio incidence was 28 per 100,000 children in the vaccinated communities and 71 per 100,000 children in the control communities. Statisticians were able to calculate that the probability of this great a difference happening by chance was less than one in a million. Thus they concluded that the vaccine had been effective. Thanks to the Salk and later Sabin vaccines, polio has today been eliminated in most of the world.
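To make the statisticians’ calculation concrete, here is a minimal sketch of a standard two-proportion test in Python. Splitting the 400,000 participants evenly between the two groups is an assumption made only for illustration (the essay reports rates, not the actual group counts), so the resulting p-value is illustrative rather than historical.

```python
import math

# Hypothetical group sizes: the essay reports only rates (28 vs. 71 per
# 100,000), so splitting the 400,000 participants evenly is an assumption
# for illustration, not the actual 1950s field-trial breakdown.
n_vaccine, n_control = 200_000, 200_000
cases_vaccine = round(28 / 100_000 * n_vaccine)   # 56 cases
cases_control = round(71 / 100_000 * n_control)   # 142 cases

# Null hypothesis: the vaccine is ineffective, so both groups share one
# underlying polio rate.  Pool the data to estimate that common rate.
p_vaccine = cases_vaccine / n_vaccine
p_control = cases_control / n_control
p_pooled = (cases_vaccine + cases_control) / (n_vaccine + n_control)

# Standard two-proportion z statistic for the observed gap in rates.
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_vaccine + 1 / n_control))
z = (p_control - p_vaccine) / se

# One-sided p-value: the probability of a gap at least this large arising
# by chance if the vaccine really made no difference.
p_value = 0.5 * math.erfc(z / math.sqrt(2))

print(f"z = {z:.2f}, p = {p_value:.2e}")
```

With these assumed group sizes, the computed p-value comes out well below one in a million, which is the sense in which the observed difference was “significant.”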

Note how the procedure goes: The researchers first designed an experiment (in this case testing 400,000 children divided into vaccine and control communities). They then identified a pattern that would demonstrate that a factor other than chance was operative (in this case the pattern was that the number of children contracting polio in vaccine communities would be significantly lower than in the control communities). Then they did the experiment. Because the difference was extremely unlikely to have occurred by chance, they inferred that the vaccine was effective.

The polio researchers designed the experiment and identified the pattern first, and only looked for the pattern after doing the experiment. The researchers thus could not have rigged the results, because the pattern they sought was clearly described before doing the experiment. Dembski’s design inference, however, looks backward in time – it takes already existing patterns and seeks an explanation other than chance, namely design.

The way Dembski addresses this problem is by requiring that patterns be specified, that is, describable in a way that is independent of the process of observation. For example, suppose someone shows you a target attached to a tree with an arrow in the bull’s-eye. If the arrow were shot first and the target placed around it, the archer’s claim of being a good shot would be invalid. But if the target were posted first and then he hit the bull’s-eye, he would be able to claim excellent marksmanship. In the second case, the specification (the target) was independent of the shot; in the first case it was not.

So to identify the presence of design, Dembski replaces the prior description of a pattern that statistical inference uses with a specified description. He still uses the presence of small probabilities in the same way. (He includes quite a lengthy discussion of “how small is small.” While interesting, it doesn’t bear on our discussion here.) He summarizes his design inference with the following “explanatory filter”:

Figure 1. Dembski’s explanatory filter. HP means “high probability.” This diamond selects phenomena that can be accounted for by “laws of nature.” IP means “intermediate probability.” These are events like flipping five heads in a row with a coin: they do not involve laws of nature, they do involve chance, but the likelihood is not as small as Dembski wants to require in order to identify design. SP means “small probability” and sp means “specified” as discussed above.

So let’s suppose we have an event and want to test it for design. We first see whether it is the result of a law of nature. If not, we test it to see if it involves an intermediate-level probability; if so, we attribute the event to chance. Lastly, if it is of very small probability and is specified, the explanatory filter attributes it to design. If it reaches the sp/SP diamond but fails either the small probability or specification test, the filter follows the same convention as statistics and attributes it to chance, due to lack of sufficient evidence to say otherwise.
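To see how the filter’s three-way decision operates, here is a minimal sketch in Python. The probability thresholds are placeholders of my own choosing rather than Dembski’s actual bounds, and whether an event is specified is simply passed in as a boolean rather than computed.

```python
from enum import Enum

class Explanation(Enum):
    REGULARITY = "law of nature"
    CHANCE = "chance"
    DESIGN = "design"

# Placeholder thresholds -- not Dembski's actual bounds; he devotes a long
# discussion to "how small is small."
HIGH_PROB = 0.5      # HP: events at least this likely go to regularity
SMALL_PROB = 1e-6    # SP: events rarer than this are candidates for design

def explanatory_filter(probability: float, specified: bool) -> Explanation:
    """A rough sketch of the three-way decision pictured in Figure 1."""
    # First diamond (HP): high-probability events are attributed to a law of nature.
    if probability >= HIGH_PROB:
        return Explanation.REGULARITY
    # Second diamond (IP): intermediate probabilities are attributed to chance.
    if probability >= SMALL_PROB:
        return Explanation.CHANCE
    # Final diamond (sp/SP): only small-probability AND specified events are
    # attributed to design; otherwise the filter defaults back to chance.
    if specified:
        return Explanation.DESIGN
    return Explanation.CHANCE

# Five heads in a row: no law of nature, intermediate probability -> chance.
print(explanatory_filter(probability=1 / 2**5, specified=False))
```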

The explanatory filter seems straightforward, but it includes two fatal flaws, one involving the small probability requirement and the other involving the three-way classification – regularity, chance, and design. We’ll examine these flaws next time.



About the Author

James Bradley

James Bradley is a Professor of Mathematics emeritus at Calvin College in Grand Rapids, Michigan, USA. He received his bachelor of science in mathematics from MIT and his doctorate in mathematics from the University of Rochester. His mathematical specialty has been game theory and operations research. In recent years, he has pursued an interest in mathematics and theology. He coedited Mathematics in a Postmodern Age: A Christian Perspective and the mathematics volume in HarperOne’s Through the Eyes of Faith series. He also edits the Journal of the Association of Christians in the Mathematical Sciences.
