The evolution / creation debate hinges largely on a disagreement regarding the nature of science and scientific theories. Before getting into that, however, it will be important to address the common misconception that evolution and atheism are somehow two sides of the same coin. After that we will define the critical terms in this debate and summarize the basics of evolutionary theory.
The Relationship of Atheism to Evolutionary Theory.
In short, there isn’t one. Science generally, and evolutionary theory specifically, do not disprove the supernatural God of Judeo-Christianity. Evolution does, of course, undermine some of the important arguments that have been put forward for the existence of God, such as Paley’s famous argument from design (best known for its use of an analogy between the complexity of living things and the complexity of a watch), but it certainly doesn’t undermine all arguments for the existence of God, and it never will, for the simple reason that science doesn’t address the existence of supernatural entities one way or the other.
Evolution is no more atheistic than is medicine. Practitioners in both fields exclude supernatural interventions from their explanations of the phenomena they investigate. For example, you wouldn’t expect your doctor to say, “We don’t need to research your disease because we believe it’s the result of a curse from God, so your only treatment is repentance.” Just because medicine excludes supernatural explanations as a matter of method, it does not follow that medicine is therefore committed to atheism. Medical doctors are not being inconsistent when they both believe in God and practice medicine under the working assumption that God has not jumped in to manipulate natural laws in order to create a disease or other medical phenomenon. Similarly, evolutionary science also excludes supernatural explanations as a matter of method, but again, this is not equivalent to saying that evolutionists are committed to atheism. What medicine and evolution (and all the sciences) are saying is that direct intervention by God, or other supernatural beings, is assumed to be unnecessary in explaining the phenomena they investigate.
If such supernatural explanations were allowed into the methodology of science, important problems would never be pursued. For example, scientists might simply have said that polio was God’s punishment for original sin; research into its supposed ‘natural’ causes would then be not only unnecessary but positively blasphemous. Regardless of whether or not such a belief is actually true, we have two choices: we can either walk away and try to pass laws banning such heretical research practices, or we can continue the research as if there were no spiritual or magical causes involved — as a methodology. History has shown that this naturalistic methodology is regularly rewarded with deeper insights into the workings of nature, insights that have moved the human race out of the Dark Ages. One may well believe that God is the ultimate originator of these laws, and even that He may intervene on occasion for His own purposes; but in order to advance any field of knowledge, one must proceed under the working assumption that God has not been and is not now involved in the areas under study. Under this approach, if we are wrong and the phenomena being investigated are supernatural in origin, then at worst our research will turn out to be a waste of time; but if we are right, and a natural cause does exist, then our human knowledge expands — we go from believing that diseases are the result of some angry deity to understanding that they are a form of predation by microorganisms; this in turn allows us to go from cowering and chanting in the face of these threats to actually controlling them.
Clearly then, the scientific method does not commit one to the belief that God does not exist, or even that God does not intervene in supernatural ways. What the scientific method does is define a methodology that allows science to move forward in all those areas in which God does not intervene, and it is effective in this only because it does not assume in advance what those areas are. Rather than assuming that God has an explanatory role until proven otherwise, the scientific method turns this around and assumes that God has no explanatory role, until it can be proven that He has. This shifting of the burden of proof, this single change in perspective, was essential to unlocking the door leading out of the Dark Ages.
Interestingly, this methodology itself amounts to a kind of experimental test of God’s explanatory role in nature. Rather than finding our scientific endeavors regularly frustrated because so many phenomena are caused by supernatural agents, we instead find that this, in fact, never happens. Even when phenomena cannot yet be explained, natural explanations can be identified that are at least as plausible as any supposed supernatural “explanation.” (This is why theists who regularly point to scientific unknowns as “proof” of God’s existence find themselves on ever-shrinking ground. Such arguments are often called God of the Gaps arguments, where “gaps” refers to the current gaps in our scientific knowledge.) Science has yet to regret its naturalistic working assumption, and this fact does amount to a powerful inductive argument that God simply does not have an explanatory role in the workings of the universe. However, this does not prove that God does not exist.
Questions of whether or not God, or other supernatural being(s), exists are simply not within the scope of science, but of philosophy. Science’s commitment is to a naturalistic methodology, not a naturalistic ontology (i.e., to a commitment that nature is all that exists). Many scientists do hold to this ontological view, but many scientists do not, and they cannot be accused of being inconsistent as a result. Belief in God is not incompatible with a commitment to science’s naturalistic methodology. Having this point explicitly made is, I think, something the many theistic and Christian evolutionary scientists well deserve.1
A Brief Description of Modern Evolutionary Theory.
Some common misconceptions of the theory of evolution are typified by such questions as “If man evolved from the apes, then why are there still apes?” Questions like this point to the importance of first ensuring that your opponent has at least a basic understanding of evolutionary theory. The theory of evolution is not, as is commonly assumed, equivalent to Darwinism. The theory of evolution is an interrelated set of now well-confirmed hypotheses including descent with modification (i.e., all life being related through common descent), along with natural selection, genetic drift, and genetics as the mechanisms behind the evolutionary process itself.
The descent with modification component of evolutionary theory asserts that all life forms can trace their lineages back to earlier classes of life forms in a branching, “nested” hierarchy (forming what looks like a bush or tree), which can ultimately be traced back to the beginning of life on earth (a point that would itself likely have been the culmination of a long period during which the distinction between living and non-living matter would have been difficult to make)2. During the long period since that time, changes in body plans have accumulated in diverse directions to make all the differences we now see between all life forms. To continue the tree analogy, you can start from the tip of any arbitrarily chosen twig and follow it back to the point where it joins another twig, the “common ancestor” of the two (or more) twigs. That now thicker twig can then be traced back to where it joins another thicker twig. This process can be repeated until you ultimately reach the thickest twig of all: the root of the bush. Applied to evolution, each twig represents a particular lineage; the points where there is a joining of those twigs into a thicker one represents the common ancestor of all the lineages that can be traced back to that point. Note that this descent with modification hypothesis can be tested independently of any ideas about how it happened, or the mechanism behind it. The mechanism introduced by Darwin was that of natural selection (along with the additional hypothesis that natural selection proceeded in a gradual, steady fashion).
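The twig-tracing procedure described above is, in effect, a lowest-common-ancestor lookup on a tree. The following minimal Python sketch illustrates it on a toy tree; the lineage names and branch points are purely hypothetical illustrations, not real phylogenetic data.

```python
# Toy illustration of "trace twigs back to the point where they join."
# The tree below is hypothetical; each species maps to its parent lineage.

parents = {
    "human": "ape-like ancestor",
    "chimp": "ape-like ancestor",
    "ape-like ancestor": "early primate",
    "mouse": "early rodent",
    "early primate": "early mammal",
    "early rodent": "early mammal",
    "early mammal": "root",
}

def lineage(species):
    """Follow a twig back toward the root, collecting each ancestor."""
    path = [species]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return path

def common_ancestor(a, b):
    """The first ancestor shared by both lineages: where the twigs join."""
    ancestors_of_a = set(lineage(a))
    for node in lineage(b):
        if node in ancestors_of_a:
            return node
    return None

print(common_ancestor("human", "chimp"))  # ape-like ancestor
print(common_ancestor("human", "mouse"))  # early mammal
```

Note how the more distantly related the two species, the further back toward the root the join point lies, exactly as the bush analogy suggests.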
To understand natural selection, we start by recognizing that individuals within any given breeding population are not identical. They differ from one another in slight and not-so-slight ways. If any of these various characteristics even slightly increases the odds that the individuals possessing them will survive long enough to reproduce, then the number of those individuals possessing those traits will tend to increase after each generation. This is so simply because more parents with that trait are having children than are parents without that trait. Over time, those traits will then become the “norm” for the population. As new traits and enhancements to existing traits continue to emerge, they will be similarly “selected for,” and then also become the norm for the group. This means improvements will accumulate. Alternatively, traits that lessen the relative odds of an individual’s reproducing will tend to be “selected against.”
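The logic of differential reproduction described above can be made concrete with a toy simulation. In this hedged sketch (all numbers are arbitrary choices, not empirical values), individuals carrying trait “A” are only slightly more likely to be chosen as parents than those carrying “a”, yet over many generations “A” becomes the norm.

```python
import random

# Toy illustration of selection: a slight reproductive edge for trait "A"
# (a hypothetical 5% advantage) compounds over generations.
random.seed(1)

def next_generation(pop, size=1000):
    # Fitness here is just the relative odds of being chosen as a parent.
    weights = [1.05 if ind == "A" else 1.0 for ind in pop]
    return random.choices(pop, weights=weights, k=size)

pop = ["A"] * 100 + ["a"] * 900   # the trait starts rare: 10% of the population
for generation in range(200):
    pop = next_generation(pop)

print(pop.count("A") / len(pop))  # close to 1.0: "A" has become the norm
```

The point of the sketch is that nothing “plans” the outcome; the frequency shift falls out of nothing more than more parents with the trait having offspring than parents without it.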
What does it mean to say a trait is “beneficial”? In the context of evolutionary theory it means nothing more than saying it increases the odds that the individual possessing the trait will successfully reproduce. Whether or not a particular trait will help or hinder that depends completely on the environment of that individual, where environment includes such factors as competition within and between species, predators and disease, available niches, and of course, the current traits already common in the population.
Therefore, some physical trait—height, say—might be beneficial in one environment, but harmful in another.
If a particular interbreeding population, say, a species of small rodent, splits into two because of continental separation or a change in the course of a large river, for example, then each of the groups will be subject to different selection pressures (since they are now in different environments). Since the two groups are no longer interbreeding, traits increasing in frequency in one group cannot be passed to the other group, and vice versa. Over a long period of time the cumulative effect of this will lead first to the appearance of different varieties (e.g., “races”), then to entirely different species. Note how the natural selection hypothesis mentions nothing about the possible mechanisms behind it, such as genetics, and can be tested independently of them. In fact, genetics was unknown to Darwin and was not applied to his theory until well after his death. Regardless, Darwin was still able to muster overwhelming evidence in support of his hypothesis.
The discovery of genetics and its application to Darwin’s ideas of natural selection resulted in what is now referred to as neo-Darwinism or the Modern Synthesis. With the discovery of genetics, we can now understand the mechanism responsible both for the naturally occurring variability within populations, and for the ability of beneficial (and neutral) traits to be preserved while harmful traits are reduced. We now understand that this variability is the result, during reproduction, of both random recombination of genetic information, and random copying errors (including the phenomenon of gene duplication, which actually creates additional genetic material upon which selection can operate). In addition, genetics gives us the means to solve apparent counter-examples to the natural selection hypothesis, such as the persistence of Sickle Cell Anemia, despite the fact that this disease is clearly not beneficial to the people suffering from it.3 Our understanding of genetics has also allowed the theory of evolution to be extended to include the effects of random genetic drift.
All of these interrelated hypotheses are properly considered part of the theory of evolution. This understanding should make clear a number of debate-related points. First, attacks on Darwinism are not necessarily equivalent to attacks on common descent. Darwin had some additional hypotheses that are not central to evolutionary theory. For example, he felt that evolution proceeded in a slow, gradual manner (a view referred to as “gradualism”). A prediction of this hypothesis is that there should be no “explosions” in the fossil record, and that gaps in the fossil record should be the result only of the lack of opportunity for fossilization itself, and not a rarity of transitional forms. In light of the preceding outline it should now be clear that, however valid this criticism, it is not the same thing as a criticism of common descent nor even of the mechanism of natural selection; it is instead a criticism of the mode or tempo of the mechanism behind common descent.
The role of natural selection as the mechanism behind common descent is now well established as an important one. This was not always the case, however. Lamarckism was a serious contender both during Darwin’s time and in the early part of the twentieth century. Essentially, Lamarckism is the idea that traits acquired during one’s lifetime could be passed on (at least to a degree). For example, a Lamarckian explanation for the giraffe’s long neck would be that the effects of the parent’s constant straining to reach ever higher leaves would be passed on as a slightly longer neck, which would accumulate over the generations. This idea has long since been discredited (via the scientific method described below). While the role of natural selection in evolution is now known to be an important one, debate as to the relative importance of additional factors, such as genetic drift—the accumulation in random directions of neutral (i.e., neither harmful nor helpful) mutations—continues. Debate also continues as to the mode and tempo of evolution, which some argue is hardly a disagreement at all. In particular, “punctuated equilibrium” disputes neither common descent, nor natural selection, but emphasizes only that evolution often, but not always, proceeds in fits and starts; that is, evolution is characterized by relatively long periods of little change followed by relatively short periods of rapid change. Importantly, this idea predicts a rarity, and not an absence, of transitional forms—and examples of transitional forms are many.
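Genetic drift, mentioned above, can also be illustrated with a toy simulation. In this sketch (population size and starting frequency are arbitrary illustrations), a perfectly neutral allele—one with no fitness effect at all—still wanders in frequency from generation to generation purely through sampling noise, and in a small population it eventually drifts all the way to fixation (100%) or loss (0%).

```python
import random

# Toy sketch of genetic drift: each generation is a random sample of the
# previous gene pool, so a neutral allele's frequency performs a random walk.
random.seed(4)

def drift(pop_size=50, freq=0.5):
    generations = 0
    while 0.0 < freq < 1.0:
        # Count how many of the new generation happen to inherit the allele.
        copies = sum(random.random() < freq for _ in range(pop_size))
        freq = copies / pop_size
        generations += 1
    return freq, generations

final, gens = drift()
print("allele", "fixed" if final == 1.0 else "lost", "after", gens, "generations")
```

Nothing selects for or against the allele here; the outcome is decided entirely by chance, which is exactly what distinguishes drift from natural selection.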
With this very brief introduction to evolutionary theory a number of important points can now be understood. First, evolution is a blind, unconscious mathematical property of any system of things that make imperfect (though very close) copies of themselves in an environment where the quality of the copy affects the copying rate; consequently, it can be rather easily simulated on a computer. This technique is now used in academia and industry, and is referred to variously as genetic algorithms (GAs), genetic programming, and evolutionary programming. For example, in GAs the ideal design of an airplane wing isn’t created by a designer, but is instead “evolved” through random mutation, recombination and selection. The results are often completely unanticipated by the creators of the GA. Such applications of evolutionary theory provide a powerful demonstration of the important role of “chance” in the emergence of complexity and order.
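A bare-bones genetic algorithm can be sketched in a few lines. This is not the wing-design example from the text—just the same mutate/recombine/select loop applied to a toy problem (evolving a bit string toward all ones), with all parameter values chosen arbitrarily for illustration.

```python
import random

# Minimal genetic algorithm: random variation plus selection, no foresight.
random.seed(0)
GENES, POP, MUT = 20, 30, 0.02

def fitness(ind):
    # Quality of the "design": here, simply the number of 1s.
    return sum(ind)

def mutate(ind):
    # Random copying errors: each gene flips with small probability MUT.
    return [1 - g if random.random() < MUT else g for g in ind]

def crossover(a, b):
    # Recombination: splice two parents at a random cut point.
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

# Start from a completely random population.
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]            # selection: the fitter half reproduce
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print(fitness(best), "/", GENES)        # typically at or near 20/20
```

Note that the loop never “looks ahead”: each generation sees only the differential survival of the previous one, yet near-optimal solutions emerge anyway—precisely the point made in the following paragraph about evolution’s lack of foresight.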
Second, there is no target design toward which evolution is somehow striving—evolution has absolutely no foresight or goals (this is easily the single greatest popular misconception), which also means that there is no progressive drive toward increased complexity or intelligence. In fact, there are many examples of simple species evolving from more complex ones. Offspring either survive or they do not. Period. There is no evolutionary “planning,” and evolution cannot “see” beyond that one step: the differential survival of characteristics in one generation.
In fact, we would expect such a process to regularly lead to less-than-ideal design solutions, which is precisely what we find in nature, and in ourselves. This is due to the fact that since evolution has no foresight, it can’t go backwards to do a “top-down” redesign in order to optimally adapt to a new environment. For example, when the Panda bear adapted to a niche of eating bamboo, it couldn’t undo millions of years of carnivore evolution and evolve an ideal bamboo-eating body; instead, its wrist bone was adapted (you’ll sometimes see the term “exapted,” coined by Stephen Jay Gould, to describe this kind of jury-rigging) to serve as the “thumb” it now needs to get at its new food source. What’s more, the Panda’s carnivorous digestive tract is extremely inefficient in extracting nutrients from its new diet, so Pandas have to go through a whole lot of bamboo to get their required nutrition (be careful where you step). This basic evolutionary concept immediately puts to rest the common creationist critique that partially evolved characteristics have no use. You might hear, for example, “What good is half a wing to an animal that is waiting for a full wing?” The answer, of course, is that before it was flight-worthy, and with no “idea” that it would ever be used for flight, the pre-wing structure was either used for something else entirely, or was a genetic side effect of an unrelated feature that was being used (and therefore being selected for). For example, just as the Panda’s thumb was originally a fully functional wrist bone, early feathers may well have been a functional insulation device. This creationist critique reflects only a serious misunderstanding of basic evolutionary theory: no evolutionist ever thought that at some point in history half of a modern wing was once sticking uselessly out of some poor animal’s body. Indeed, evidence of such a process would undermine, and not support, evolutionary theory.
Now, as for why there are still apes if we evolved from apes, the answer is simply that we did not evolve from today’s apes (keep in mind the tree/bush analogy), but with them share a common ancestor that was different from either humans or apes. Generally, the more similar the species the more recent is their common ancestor; the less similar, the further back is their common ancestor.
Good vs. Not-so-good Science
Before getting into questions of evidence it is extremely important to first review the nature of scientific theories and methodology. First, let’s define some terms that often lead to confusion. “Hypothesis” can be thought of as an educated guess which has yet to be confirmed through testing. An example would be what you do when the light in your room suddenly goes off. You might hypothesize as follows: “I suspect that a fuse blew, since the electric clock went off too.” Now, this hypothesis is still subject to testing. You may test it by seeing if the whole house is without power, or by seeing if your street lights are on. In the process you’re throwing out falsified hypotheses and forming and testing new ones. Once the various predictions of your hypothesis become consistently confirmed, you have what might now be a “confirmed-hypothesis.” An interrelated set of such confirmed hypotheses, models, and directly observed facts can form a “Theory,” when collectively they provide a systematically organized body of information that can be used to effectively explain and predict real world happenings. The Theory of the Atom is a clear example, as is the Theory of Evolution.
In everyday usage, “theory” is often used interchangeably with “unconfirmed hypothesis,” or even “wild guess.” It is in this everyday sense of the word that creationists will often complain that evolution is just “a theory.” But scientists also refer to the Theory of Electromagnetism, Gravitation Theory, and the Germ Theory of Disease. When they use the term “theory,” it is in the “web of interrelated confirmed hypotheses” sense of the word. In the creationists’ usage of the word, it would be just as legitimate to say the “Theory of Alien Telepathy” as it is to say the “Theory of the Atom,” but of course, Alien Telepathy is not a theory at all in the sense in which scientists use the term.
Are theories ever absolutely certain? No. But theories are not equally uncertain. For example, one could claim that while germs cause disease under the microscope, no one has ever observed germs making people sick in the human body, and no one has actually seen the electrons orbiting the nucleus of an atom. (This is similar to the creationist complaint that no one can go back millions of years and observe evolution occurring.) Science is not simply about reporting what we directly observe. In fact, the whole point and value of science is to use what we can see to tell us about what we cannot see.
If theories and their hypotheses are never technically certain, how do we know that the Theory of the Atom is any stronger than the Theory of Alien Telepathy? Basically, a theory is strong if it does not contradict itself, and if it makes testable (typically unexpected) predictions that could easily falsify the theory—but, in fact, do not. Importantly, a good theory deals with its serious counterexamples (apparent evidence against the theory) through independently testable “excuses”; that is, the excuses it makes for the counterexamples can be shown to be valid without relying on the theory itself. An example would be how Newton’s failure to explain the orbit of Uranus led not to the rejection of Newton’s theory but to the independently testable “excuse” that there was an as yet undiscovered planet, Neptune, the existence of which was independently verifiable with a telescope. Instead of being a problem for Newton, the Neptune solution provided powerful independent confirmation of his theory.4 A good theory should also show that what had looked like unrelated phenomena are actually related, and it should cause us to ask new questions that we never would have thought to ask without the theory’s insights—questions that lead to even more confirmed predictions. The strongest theories spawn new productive disciplines that further confirm the theory that gave rise to them, while spawning new well-confirmed theories of their own.
The importance of independent confirmation cannot be overstated. Independently confirmed predictions create a mutually reinforced “web” of confirmations that cannot be dismissed simply by casting doubt on any one confirming test. For example, when the prosecution in a courtroom presents not just one witness, but a parade of witnesses none of whom even know each other, each corroborating the same event from different, independent vantage points, they create a very powerful case. What makes it powerful is not just that the defense has to create reasonable doubt in more than one witness’ testimony. The burden on the defense is much bigger than this, much bigger than merely showing that each witness might be wrong. It is even bigger than showing that they are all wrong. The problem is making it reasonable to suppose that they all independently came up not just with a wrong answer, but with the same wrong answer—independently of each other. The odds of such an event would be astronomically small.
Now even when all these criteria are met by the best scientific theories in history, those theories are never absolutely certain. In fact, it is possible, though perhaps extremely unlikely, that such a theory will be completely overturned. Far more likely, however, is that a theory with such a success record will be enhanced, built upon, perhaps even being shown to be a special case of a more general theory (just as Newton’s theories were when Einstein came along), but not thrown out. In fact, the history of science is just such a history: not a parade of theories that get overturned only to be replaced by the next generation of equally doomed theories, but a story of good theories being built upon, a story of real progress, a kind of “nested hierarchy” of its own—even despite frequent and ferocious resistance from various religious quarters.
In short, good theories work; they add real value and real insight, all of which produce real results. Given this understanding of what makes a theory good, a special problem for any challenger theory should now be apparent. With the earlier courtroom example in mind, if a challenger points out that the dominant theory is unable to answer some question (which is typical of even the most successful theories), or that there is some currently unexplained counterexample, the challenger should not expect more than a “yeah, so?” unless she can show—and this is important—how her new theory not only fills in these gaps in verifiable ways, but how it can, in addition, better explain the vast body of successful predictions, independent corroboration, and explanatory power of the dominant theory. And beyond that, the challenger theory has to make clear how all the mutually reinforcing independent confirmations of the dominant theory were coincidental errors on an apparently vast scale.
It is with this understanding of the nature of science, scientific theories, and evolution in particular, that we need to evaluate the arguments and the evidence for creationism and evolutionary theory.
1 The preceding subsection draws much from Robert Pennock’s very easily understood and thorough untangling of this methodology / ontology confusion, which is shared by those on both sides of the debate. Pennock also makes the important point that much of the fear that appears to motivate creationists comes from this confusion, as well as from an unfounded view that morality can be meaningful only if it comes from a supernatural being. See Robert T. Pennock’s, Tower of Babel: The Evidence Against the New Creationism (Cambridge: The MIT Press, 1999).
2 In a personal correspondence, Vincent M. Wales provided some helpful suggestions on an earlier draft of this article, including that of emphasizing the fact that evolution does not specifically require a single, common ancestor. Building on Wales’ suggestion, I would also hasten to add that the theory of evolution does not specifically address the origin of life at all, only its subsequent development. However, this is not to say that its principles are not a key part of the work being done in origin of life research, nor to say that the evidence supporting evolution is not highly suggestive of there having been just one common ancestor.
3 Genetics provides the insight by showing that the disease results from the presence of a recessive gene, and that a recessive “carrier” of the Sickle Cell gene is at an advantage in environments where malaria is present. “Recessive” in this case means that if you got the gene from only one of your parents, you will not exhibit the Sickle Cell symptoms, but you will be highly resistant to malaria; however, if you got the gene from both parents, you will suffer the full effects of the disease. With this insight, the puzzle is solved because the beneficial effects of the Sickle Cell gene in recessive carriers are selected for. Of course, this applies only when the population is being exposed to malaria. This neo-Darwinian model explains not only the disease’s persistence, but its persistence only in populations that are exposed to malaria.
4 I take this example, as well as many of these methodology-of-science concepts, from Philip Kitcher’s Abusing Science: The Case Against Creationism (Cambridge: The MIT Press, 1993).