Notes on Intelligent Design
The Current Landscape
In December 2004, Antony Flew, a British philosopher, abandoned his lifelong atheism, citing evidence of intelligent design in DNA. In the same month, the ACLU sued a Pennsylvania school district for informing students about intelligent design. In February 2005, The Wall Street Journal reported that an evolutionary biologist at the Smithsonian Institution was punished for publishing an article supporting intelligent design. Since 2005, intelligent design has received significant media coverage. A major conference on intelligent design was held in Prague, signaling worldwide interest.
What is Intelligent Design?
Intelligent design is often portrayed as a faith-based alternative to evolution, a repackaged form of biblical creationism. However, the modern theory of intelligent design was proposed in the late 1970s and early 1980s by scientists like Charles Thaxton, Walter Bradley, and Roger Olsen, who sought to explain the origin of the digital information encoded in DNA.
These scientists concluded that DNA's information-bearing properties suggest a prior designing intelligence. This proposition predates the 1987 Supreme Court decision against teaching creationism.
Intelligent Design vs. Creationism
Intelligent design differs from creationism in method and content. It is based on observations of nature and inferences from empirical evidence, not religious authority. The theory seeks causal explanations for observed complexity, such as the information in DNA, miniature circuits in cells, and the fine-tuning of physical constants. It doesn't challenge evolution as change over time or common ancestry but disputes the idea that biological change is wholly undirected.
A Brief History of the Design Argument
Advocates of intelligent design have revived the classical design argument. Before Darwin, many thinkers attributed life's origins to a purposeful designer. Figures like Plato, Cicero, Maimonides, and Thomas Aquinas made design arguments based on nature. The concept of design also influenced the scientific revolution (1500-1700).
Scientists like Johannes Kepler, John Ray, and Robert Boyle made design arguments based on empirical discoveries. Isaac Newton argued that the stability of the planetary system required precise initial positioning by an intelligent being. Newton questioned how animals' bodies could be contrived with such art and purpose without a skilled creator.
Skepticism about the design argument arose in the 18th century. David Hume argued that the design argument relies on a flawed analogy with human artifacts, highlighting differences between organisms and artifacts. Despite Hume's objections, thinkers like Thomas Reid, Thomas Paine, and Immanuel Kant continued to support versions of the design argument.
William Paley's Natural Theology cataloged biological systems suggesting a superintending intelligence. Paley argued that the complexity and adaptation in these systems could not arise from blind forces alone. He countered Hume's analogy critique by asserting that self-replicating watches would be even more marvelous, strengthening the design conclusion.
Darwin and the Eclipse of Design
Acceptance of the design argument waned with the rise of materialistic explanations in biology, particularly Darwin's theory of evolution by natural selection. Darwin contended that organisms only appeared designed. He proposed natural selection acting on random variations as a mechanism for adaptation without intelligent direction.
Darwin argued that natural forces could mimic a selecting intelligence, making design explanations unnecessary. Other naturalistic origin scenarios in astronomy, cosmology, and geology further diminished the design argument. A positivistic tradition in science also sought to exclude supernatural causes.
Natural theologians shifted to locating design in natural law rather than specific structures, weakening the empirical content of their arguments. Design became a matter of subjective belief, undetectable except through faith. While the design argument retreated, it didn't disappear entirely. Scientists like Louis Agassiz and Alfred Russel Wallace challenged Darwin, citing evidence of a guiding intelligence.
The mechanism of natural selection had a mixed reception initially because Darwin lacked an adequate theory of heritable variation; the blending theory of inheritance and, later, early Mendelian genetics seemed to impose limits on genetic variability. By the late 1930s and 1940s, natural selection was revived through the neo-Darwinian synthesis. This theory posited that natural selection acting on random mutations could explain the origin of novel biological forms, and that small-scale changes could be extrapolated to explain large-scale development. Neo-Darwinists asserted that they had found a "designer substitute" to explain the appearance of design.
Problems with the Neo-Darwinian Synthesis
Since the late 1960s, the modern synthesis has faced challenges from paleontology, systematics, molecular biology, genetics, and developmental biology. Several books have questioned the creative power of the mutation/selection mechanism. A search for alternative naturalistic mechanisms has ensued without success. Doubts about the selection/mutation mechanism are so common that evolutionary theory spokesmen periodically reassure the public that the occurrence of evolution is not in doubt. Some scientists acknowledge a crisis in evolutionary theory.
Books advocating intelligent design as an alternative to neo-Darwinism began to appear. The scientific roots of intelligent design trace back to the molecular biological revolution. In 1953, Watson and Crick discovered that DNA stores information in a four-character digital code. Sequences of nucleotide bases store instructions for building proteins. Francis Crick's "sequence hypothesis" stated that chemical constituents in DNA function like letters in a language or symbols in computer code. The arrangement of these characters determines the sequence's function, giving DNA sequence specificity or specified complexity.
Richard Dawkins acknowledged that "the machine code of the genes is uncannily computer-like." Bill Gates noted that “DNA is like a computer program but far, far more advanced than any software ever created”. Further discoveries revealed that DNA and RNA are part of a complex information processing system that mirrors and exceeds our own nanotechnology.
Evidence that many scientists would see as pointing to design was being uncovered in molecular biology. Doubts about random mutations generating genetic information were expressed by mathematicians, engineers, and physicists.
The Wistar Institute Symposium
Mathematicians and biologists gathered informally in Geneva in the mid-1960s, where mathematicians questioned biologists' confidence in mutations assembling the genetic information for evolutionary innovation. This led to a conference at the Wistar Institute in Philadelphia in 1966, chaired by Sir Peter Medawar. The mathematicians argued that neo-Darwinism faced a combinatorial problem. The ratio of functional genes and proteins to possible sequences seemed too small for genetic information to originate by random mutation. A protein of 100 amino acids has roughly 10^130 possible sequences, the vast majority of which are assumed to perform no biological function. The mathematicians and physicists thought that an undirected search through such a space would not find a functional sequence in the allotted time. M. P. Schützenberger noted that randomness is never the friend of function in human codes. Murray Eden illustrated the point with an imaginary library evolving by random changes to a phrase; the mathematicians, physicists, and engineers argued that it would not succeed.
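The 10^130 figure can be checked with a back-of-the-envelope calculation (this sketch is an illustration of the arithmetic, not part of the Wistar arguments themselves): with 20 amino acids available at each of 100 positions, the number of possible sequences is 20^100.

```python
import math

# With 20 possible amino acids at each of 100 positions,
# the number of distinct sequences is 20^100.
n_sequences = 20 ** 100

# Express as a power of ten: log10(20^100) = 100 * log10(20) ≈ 130.1
exponent = 100 * math.log10(20)
print(f"20^100 ≈ 10^{exponent:.1f}")  # 20^100 ≈ 10^130.1
```

The exact value is about 1.27 × 10^130, which is the source of the rounded figure quoted in the text.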
Michael Polanyi and the Irreducibility of Information
Michael Polanyi argued that DNA's information was “irreducible” to physics and chemistry. He noted that DNA conveys information through specific arrangements of nucleotide bases, but physical laws allow many other arrangements. Since chemical laws allow a vast number of possible arrangements of nucleotide bases, no specific arrangement was dictated or determined by those laws. Polanyi argued that this chemical indeterminacy allows DNA to store information and shows its irreducibility to physical-chemical laws. He stated:
“Suppose that the actual structure of a DNA molecule were due to the fact that the bindings of its bases were much stronger than the bindings would be for any other distribution of bases, then such a DNA molecule would have no information content. Its code-like character would be effaced by an overwhelming redundancy. […] Whatever may be the origin of a DNA configuration, it can function as a code only if its order is not due to the forces of potential energy. It must be as physically indeterminate as the sequence of words is on a printed page.”
The Mystery of Life’s Origin
As scientists doubted undirected processes' ability to produce genetic information, some considered an alternative. In 1984, Charles Thaxton, Walter Bradley, and Roger Olsen published The Mystery of Life’s Origin, proposing an intelligent cause for the origin of biological information. The book challenged chemical evolutionary explanations of the origin of life. Thaxton met with Dean Kenyon, co-author of Biochemical Predestination, to ensure Mystery's critiques were fair. Kenyon volunteered to write the foreword, explaining that he had been moving towards Thaxton’s position: experiments suggested that simple chemicals do not arrange themselves into complex information-bearing molecules without guidance. Kenyon found Thaxton, Bradley and Olsen’s case well-reasoned and well-researched, and in the foreword he described The Mystery of Life’s Origin as “an extraordinary new analysis of an age-old question”.
The book became the best-selling advanced college-level work on chemical evolution, with endorsements from scientists and favorable reviews. Mystery critiqued all current materialistic explanations for the origin of life. It showed that the Miller-Urey experiment did not simulate early Earth conditions, the early Earth prebiotic soup was a myth, important chemical evolutionary transitions were subject to destructive interfering cross-reactions, and neither chance nor energy-flow could account for the information in biopolymers. The scientists proposed that DNA's information-bearing properties point to an intelligent cause. Drawing on Polanyi's work, they argued that chemistry and physics alone couldn't produce information any more than ink and paper could produce a book's information. They argued that our experience suggests information is the product of an intelligent cause:
“We have observational evidence in the present that intelligent investigators can (and do) build contrivances to channel energy down nonrandom chemical pathways to bring about some complex chemical synthesis, even gene building. May not the principle of uniformity then be used in a broader frame of consideration to suggest that DNA had an intelligent cause at the beginning?”
Mystery also claimed that intelligent causes could be scientifically considered within the historical sciences, a mode of inquiry they called origins science.
Their book marked the beginning of interest in the theory of intelligent design in the United States, inspiring younger scholars to investigate whether there is actual design in living organisms. At the time the book appeared, the author was working as a geophysicist for the Atlantic Richfield Company in Dallas, where Charles Thaxton happened to live. The author later met him at a scientific conference and became intrigued with the idea he was developing about DNA. The author eventually left his job as a geophysicist to pursue a Ph.D. in the history and philosophy of science at the University of Cambridge.
Of Clues and Causes
During his Ph.D. research at Cambridge, the author found that the historical sciences (geology, paleontology, archeology) employ a distinctive method of inquiry. Historical scientists attempt to infer past causes from present effects; a paleontologist, for example, infers a past situation or cause from present clues. William Whewell distinguished the inductive sciences (such as physics) from the palaetiological sciences, which are defined by three features:
It seeks to determine “ancient condition[s]” (Whewell 1857, vol. 3: 397) or past causal events.
It explains present events (“manifest effects”) by reference to past (causal) events rather than by reference to general laws (though laws sometimes play a subsidiary role).
It utilizes a distinctive mode of reasoning in which past conditions are inferred from “manifest effects” using generalizations linking present clues with past causes.
Inference to the Best Explanation
This type of inference is called abductive reasoning, first described by C.S. Peirce. Abductive reasoning infers unseen facts, events, or causes in the past from clues or facts in the present.
Peirce noted a problem with abductive reasoning:
“If it rains, the streets will get wet. The streets are wet. Therefore, it rained.”
This syllogism commits the fallacy of affirming the consequent because there are many possible causes of a given effect. Rain, a street cleaning machine, or an uncapped fire hydrant may have caused the streets to get wet.
Peirce's answer was revealing. Consider, he suggested, how we know that Napoleon Bonaparte existed: "Though we have not seen the man [Napoleon], yet we cannot explain what we have seen without" the hypothesis of his existence (Peirce, 1932, vol. 2: 375).
A particular abductive hypothesis is strengthened if it explains a result in a way that rival hypotheses do not, and it can reasonably be believed (in practice) if it represents the best or the only adequate explanation of the "manifest effects" (to use Whewell's term).
To address this problem, Thomas Chamberlin delineated “the method of multiple working hypotheses.” Historical scientists weigh the evidence and what they know about various possible causes to determine which best explains the clues before them. Contemporary philosophers of science have called this the method of inference to the best explanation.
Causes Now in Operation
Historical scientists try to identify causes that are known to produce the effect in question. In making such determinations, historical scientists evaluate hypotheses against their present knowledge of cause and effect. Causes that are known to produce the effect in question are judged to be better causes than those that are not.
When historical scientists seek to explain events in the past, they should not invoke unknown or exotic causes whose effects we do not know; instead, they should cite causes that are known from our uniform experience to have the power to produce the effect in question (i.e., “causes now in operation”). Darwin subscribed to this methodological principle. Such a cause is called a vera causa, a true or actual cause.
For example, Darwin tried to show that the process of descent with modification was the vera causa of certain kinds of patterns found among living organisms. So he proposed descent with modification as a vera causa for homologous structures. Darwin argued that our uniform experience shows that the process of descent with modification from a common ancestor is “causally adequate” or capable of producing homologous features.
And Then There Was One
Philosophers of science Michael Scriven and Elliott Sober point out that historical scientists can make inferences about the past with confidence when they discover evidence or artifacts for which there is only one known cause capable of producing them. The process of determining the best explanation often involves generating a list of possible hypotheses, comparing their known or theoretically plausible causal powers with respect to the relevant data, and then progressively eliminating potential but inadequate explanations until one remaining causally adequate explanation can be identified as the best.
Such inferences often take the form:

X is antecedently necessary to Y.
Y exists.
Therefore, X existed.
When scientists discover an effect for which there is only one plausible cause, they can infer the presence or action of that cause in the past with great confidence. Darwin employed this method in The Origin of Species. He argued for his theory of Universal Common Descent because it could explain already-known facts better than rival hypotheses.
DNA by Design: Developing the Argument from Information
This investigation into historical scientific reasoning bears directly on intelligent design, the origin of biological information, and the mystery of life’s origin. The central question facing scientists trying to explain the origin of the first life is this: how did the sequence-specific digital information (stored in DNA and RNA) necessary to build the first cell arise? A closely related question is: what presently acting cause is capable of producing digital information? Both reduce to the same question: what type of cause has demonstrated the power to generate information?
Based upon both common experience and knowledge of the many failed attempts to solve the problem with “unguided” pre-biotic simulation experiments and computer simulations, the author concluded that there is only one sufficient or “presently acting” cause of the origin of such functionally-specified information: intelligence. Based on an experience-based understanding of the cause-and-effect structure of the world, intelligent design is the best explanation for the origin of the information necessary to build the first cell. If one applies Lyell’s uniformitarian method to the question of the origin of biological information, the evidence from molecular biology supports an inference to design.
What is Information?
Part of the historical scientific method of reasoning involves first defining what philosophers of science call the explanandum – the entity that needs to be explained. Contemporary biology had shown that the cell was, among other things, a repository of information. Origin-of-life studies had focused increasingly on trying to explain the origin of that information. The term “information” can be used to denote several theoretically distinct concepts.
In developing a case for design from the information-bearing properties of DNA, it was necessary to distinguish two key notions of information from one another: mere information carrying capacity, on the one hand, and functionally-specified information, on the other. It was important to make this distinction because the kind of information that is present in DNA (like the information present in machine code or written language) has a feature that the well-known Shannon theory of information does not encompass or describe. During the 1940s, Claude Shannon developed a mathematical theory of information that equated the amount of information transmitted with the amount of uncertainty reduced or eliminated by a series of symbols or characters.
Shannon generalized this relationship by stating that the amount of information conveyed by an event is inversely proportional to the prior probability of its occurrence. This theory applies easily to sequences of alphabetic symbols or characters that function as such. Within a given alphabet of x possible characters, the occurrence or placement of a specific character eliminates x-1 other possibilities and thus a corresponding amount of uncertainty. The greater the number of possible characters at each site, and the longer the sequence of characters, the greater is the information-carrying capacity – or Shannon information – associated with the sequence.
Molecular biologists can calculate the information-carrying capacity of DNA molecules using Shannon’s theory. Since at any given site along the DNA backbone any one of four nucleotide bases may occur with equal probability, the probability of the occurrence of a specific nucleotide at that site equals 1/4 or 0.25. The information-carrying capacity of a sequence of a specific length n can then be calculated using Shannon’s familiar expression I = –log2(p), once one computes a probability value p for the occurrence of a particular sequence n nucleotides long, where p = (1/4)^n.
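The calculation just described can be sketched in a few lines of Python (assuming, as the text does, four equiprobable bases at each site):

```python
import math

def information_capacity_bits(n: int, alphabet_size: int = 4) -> float:
    """Shannon information-carrying capacity I = -log2(p) of a sequence
    of n characters drawn with equal probability from an alphabet of
    alphabet_size characters, where p = (1/alphabet_size)^n."""
    # Closed form: -log2((1/a)^n) = n * log2(a).
    # This avoids floating-point underflow of p for long sequences.
    return n * math.log2(alphabet_size)

# A 100-base DNA sequence carries 100 * log2(4) = 200 bits of capacity.
print(information_capacity_bits(100))  # 200.0
```

As the surrounding text stresses, this number measures only carrying capacity (improbability), not whether the sequence is functionally specified.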
Shannon’s theory and equations provided a powerful way to measure the amount of information that could be transmitted across a communication channel. Because the theory applies to any sequence of symbols, it enabled scientists to render quantitative measures of the information-carrying capacity of DNA sequences. Shannon himself recognized, however, that this measure must not be confused with meaning.
As scientists applied Shannon information theory to biology, it enabled them to render rough quantitative measures of the information-carrying capacity (or brute complexity or improbability) of DNA sequences and their corresponding proteins. Nevertheless, the ease with which information theory applied to molecular biology (to measure information-carrying capacity) created confusion about the sense in which DNA and proteins contain “information.” Information theory strongly suggested that DNA and proteins possess vast information-carrying capacities, as defined by Shannon’s theory. When molecular biologists have described DNA as the carrier of hereditary information, however, they have meant much more than that technically limited sense of the term.
Molecular biologists understood biological information as something more than mere complexity (or improbability). Crick and Monod also recognized that sequences of nucleotides and amino acids in functioning bio-macromolecules possessed a high degree of specificity relative to the maintenance of cellular function. Biologists have defined specificity tacitly as necessary to achieving or maintaining function. They have determined that DNA base sequences are specified, not by applying information theory, but by making experimental assessments of the function of those sequences within the overall apparatus of gene expression.
In developing an argument for intelligent design, the author emphasized that the information in these molecules was functionally-specified and complex, not just complex. To avoid equivocation, it was necessary to distinguish:
“information content” from mere “information carrying capacity,”
“specified information” from mere “Shannon information”
“specified complexity” from mere “complexity.”
In developing an argument for intelligent design, the author acknowledged that merely complex or improbable phenomena or sequences might arise by undirected natural processes, but argued, based upon our uniform experience, that sequences that are both complex and functionally-specified invariably arise only from the activity of intelligent agents, and that the presence of specified information therefore provides a hallmark or signature of a designing intelligence. In applying these distinctions to the analysis of biological systems, the author was greatly assisted by conversations and collaboration with William Dembski, who was developing a general theory of design detection.
Darwin on Trial and Philip Johnson
In 1987, the author met Philip Johnson, a law professor at the University of California, Berkeley, whose interest in biological origins would transform the debate over evolution. At that meeting Johnson expressed his skepticism about Darwinism; his doubts had started with a visit to the British Natural History Museum.
Johnson began to read everything he could find on the issue: Gould, Ruse, Ridley, Dawkins, Denton, and many others. What he read made him even more suspicious of evolutionary orthodoxy; it suggested to him that Darwinists had something to hide.
An extensive examination of the evolutionary literature confirmed his suspicion that Darwinists rely upon arguments that assume rather than demonstrate the central claim of neo-Darwinism: that life has evolved via a strictly undirected natural process. When writing popular books, Darwinists employed an evasive and moralizing rhetorical style to minimize problems and belittle critics, yet remained confident that all organisms had evolved naturally from simpler forms. Johnson concluded that evolutionary biologists remain confident about neo-Darwinism not because empirical evidence generally supports the theory, but because their perception of the rules of scientific procedure virtually prevents them from considering any alternative view. According to the National Academy of Sciences (NAS), the “most basic characteristic of science” is a “reliance upon naturalistic explanations.”
Methodological Naturalism
Johnson accepted “methodological naturalism” as an accurate description of how much of science operates, but he argued that treating it as a normative rule when seeking to establish that natural processes alone produced life assumes the very point that neo-Darwinists are trying to establish. Johnson distinguished the various meanings of the term “evolution” (such as change over time or common ancestry) from the central claim of Darwinism, namely, the claim that a purely undirected and unguided process had produced the appearance of design in living organisms.
Modern Darwinists refuse to consider the possibility of design because they think the rules of science forbid it. That may be one way to win an argument, but it does not demonstrate the superiority of a protected theory.
Johnson saw that the convention of methodological naturalism forced scientists into a question-begging affirmation of the proposition that life and humankind had arisen by a purposeless natural process. The author had come to question methodological naturalism because it seemed to prevent historical scientists from considering all the possible hypotheses that might explain the evidence. Historical scientists must be allowed to let hypotheses compete without artificial restrictions on the competition. Johnson's book Darwin on Trial created a sensation, and some scientists shared his skepticism about neo-Darwinism.
Darwin’s Black Box and Michael Behe
Michael Behe had doubted Darwinian evolution by reading Denton’s Evolution: A Theory in Crisis. Behe had no theological objections to Darwinian evolution. He did have serious scientific doubts, and began to investigate the evidence from his own field of biochemistry. He became skeptical that the Darwinian mechanism could produce the kind of functionally integrated complexity that characterizes the inner workings of the cell.
As his interest grew, he began teaching a freshman course on the evolution controversy. After he wrote a letter defending Johnson’s new book, the two began exchanging letters, and Johnson invited him to Southern Methodist University, where Johnson debated the Darwinist philosopher of science Michael Ruse. The meeting was significant because the scientists skeptical of Darwin who were present at the debate were able to experience what they already believed intellectually: they had strong arguments that could withstand high-level scrutiny from their peers. At SMU, many of the leaders of the intelligent design research community met together for the first time in one place. At a later meeting at California’s Pajaro Dunes, “the movement” congealed, and Behe and others subsequently used an email listserv to test and refine the various arguments for a book.
In Darwin’s Black Box, Behe pointed out that biologists have discovered an exquisite world of nanotechnology within living cells, for example, rotary machines in bacterial cells called flagellar motors that spin at speeds up to 100,000 rpm. The flagellar motor depends on the coordinated function of 30 protein parts; remove one of these necessary proteins and the rotary motor simply doesn't work. This creates a problem for the Darwinian mechanism because the motor is “irreducibly complex.” Natural selection preserves or “selects” functional advantages, yet the flagellar motor does not function unless all of its thirty parts are present, so natural selection can preserve the motor only once it has arisen as a functioning whole.
Natural selection purportedly builds complex systems from simpler structures by preserving a series of intermediate structures, each of which must perform some function. Most of the critical intermediate stages, like a 29- or 28-part version of the flagellar motor, perform no function for natural selection to preserve. Based upon our uniform experience, we know of only one type of cause that produces irreducibly complex systems: intelligence. Whenever we encounter such complex systems and we know how they arose, invariably a designing intelligence played a role.
The strength of Behe's argument can be judged in part by the responses of his critics, who contend that natural selection builds irreducibly complex systems by “co-opting” simpler functional parts from other systems. Critics like Kenneth Miller have suggested that the flagellar motor arose from the functional parts of other, simpler systems or from simpler subsystems of the motor; a smaller molecular pump, the type III secretory system, has been proposed as a prior stage. There is little evidence, however, to support the claim that these intermediate stages would have had any function for natural selection to preserve. Moreover, analyses of the gene sequences of the two systems suggest that the flagellar motor arose first and the syringe evolved from the motor, not the motor from the syringe.
An Institutional Home
In 1996, the Center for Science and Culture was launched as part of the Seattle-based Discovery Institute. The Center began with a research fellowship program to support the research of scientists who were challenging neo-Darwinism. The Center has since become the institutional hub for an international group of scientists and scholars who are challenging scientific materialism or developing the theory of intelligent design.
William Dembski and The Design Inference
William Dembski argued that rational agents often infer or detect the prior activity of other designing minds by the character of the effects they leave behind. Dembski’s work showed that recognizing the activity of intelligent agents constitutes a common and fully rational mode of inference. More importantly, Dembski’s work explicated criteria by which rational agents recognize the effects of other rational agents, and distinguish them from the effects of natural causes.
He argued that systems or sequences that have the joint properties of “high complexity” (or low probability) and “specification” invariably result from intelligent causes, not chance or physical-chemical laws. Dembski noted that complex sequences are those that exhibit an irregular and improbable arrangement that defies expression by a simple rule or algorithm. According to Dembski, a specification, on the other hand, is a match or correspondence between a physical system or sequence and a set of independent functional requirements or constraints.
To illustrate these concepts consider the following three sets of symbols:
“inetehnsdysk]idfawqnz,mfdifhsnmcpew,ms.s/a”
“Time and tide wait for no man.”
“ABABABABABABABABABABABABAB”
Both the first and second sequences shown above are complex because both defy reduction to a simple rule. The third sequence is not complex, but is instead highly ordered and repetitive. Of the two complex sequences, only one exemplifies a set of independent functional requirements, i.e., is specified. The second sequence (“Time and tide wait for no man”) clearly exhibits such a match between itself and the preexisting requirements of vocabulary and grammar; it employs these conventions to express a meaningful idea. Of the three sequences above, only the second manifests both of the jointly necessary indicators of a designed system.
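One rough, informal way to see the complexity side of this distinction is to compare the compressibility of the three sequences: a sequence that reduces to a simple rule compresses well. This is only an illustrative proxy, not Dembski's formal measure, and compression says nothing about specification, which requires an independent functional standard such as English vocabulary and grammar:

```python
import zlib

sequences = {
    "random":    "inetehnsdysk]idfawqnz,mfdifhsnmcpew,ms.s/a",
    "specified": "Time and tide wait for no man.",
    "ordered":   "ABABABABABABABABABABABABAB",
}

for label, s in sequences.items():
    raw = s.encode()
    packed = zlib.compress(raw, 9)
    # The repetitive "ordered" string compresses far better than the
    # other two: it reduces to a simple rule ("repeat AB"), and is
    # therefore ordered rather than complex in the sense used above.
    print(f"{label:9s} original={len(raw):2d} bytes, compressed={len(packed):2d} bytes")
```

Note that compression cannot distinguish the random string from the English sentence; that distinction rests on the sentence's match to independent functional requirements, not on any statistical property.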
Dembski’s work suggested that “high information content” or “specified information” or “specified complexity” indicates prior intelligent activity. This theoretical insight comported with common as well as scientific experience. Few rational people would attribute hieroglyphic inscriptions to natural forces; instead, they would recognize the work of an intelligent agent. Dembski represents his method with a device he calls “the explanatory filter,” which allows scientists to decide among three different types of explanation: chance, necessity, and design. The filter constitutes a method for detecting the effects of intelligence.
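The filter's logic can be sketched as a simple decision procedure. This is a paraphrase of its published structure, not Dembski's formal apparatus; the hard work of estimating probabilities and identifying an independent specification is assumed to have been done and is passed in here as boolean judgments:

```python
def explanatory_filter(attributable_to_law: bool,
                       highly_improbable: bool,
                       specified: bool) -> str:
    """Sketch of Dembski's explanatory filter as a decision tree.
    Events explained by natural regularities are assigned to necessity;
    events probable enough to occur by chance are assigned to chance;
    only events that are both highly improbable (complex) AND specified
    are assigned to design."""
    if attributable_to_law:
        return "necessity"
    if not highly_improbable:
        return "chance"
    if specified:
        return "design"
    return "chance"  # complex but unspecified events default to chance

# A sequence that is both complex and matched to independent functional
# requirements (like a meaningful sentence) passes through to design:
print(explanatory_filter(False, True, True))  # design
```

The ordering matters: law and chance are eliminated before design is inferred, which is why the filter is described as an eliminative method.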
Dembski’s formal method also reinforced the argument that DNA is best explained by an intelligent cause rather than by reference to chance, necessity or a combination of the two. The coding regions of the nucleotide base sequences in DNA manifest both complexity and specification.
Design Beyond Biology
The author has worked with scientists and scholars to develop the case for intelligent design not only in biology but also in the physical sciences, and this work has stirred up debate at the highest levels of the scientific community. Design is larger than biology: molecular and cell biology have provided powerful evidence of design, but so too have chemistry, astronomy, and physics.
Physicists have discovered many constants and settings that must be precisely balanced to allow for life. The constants of physics, the initial conditions of the universe, and many of its other contingent features appear delicately balanced to allow for the possibility of life, a pattern known as the “anthropic coincidences.”