Evolution: Converging Lines of Evidence

The strength of any theory comes not from a single measurement or a single confirmed prediction, but from the theory’s many predictions being confirmed by many independent tests, samples, and methods.  Often, attacks on evolutionary theory take the form of showing that some measurement technique is not infallible, or that some measurement technique depends on assumptions that could be wrong.  Examples, of course, include creationist criticisms of radiometric dating, which is a common technique for dating rocks.

When the creationist adds to these types of criticisms the charge that there are some things evolutionary theory cannot explain, it can seem to some that evolutionary theory is a weak, speculative hypothesis.  But as we will show below, scientific theories aren’t strong because the measurement techniques that confirm them are perfect, or because they have no open questions.  If this were the case, all science would be weak and speculative.

The best way to illustrate the fundamental problem with the creationist critique is by way of an analogy.  Let’s say I’m the prosecuting attorney in a felony case, and my case depends on placing the suspect in a certain location between 4:55 PM and 5:05 PM, during which time the suspect shot someone.  Now let’s say twenty of my witnesses happened to look at their watches (a combination of analog and digital watches) when they saw the suspect shoot.  Seventeen other witnesses separately recall seeing the suspect shoot just as the 5 o’clock news was coming on in each of their apartments in the neighboring building.  Twenty-six other witnesses saw the suspect shoot just as the 5 o’clock whistle blew at the nearby quarry.

Now, imagine the defense carefully pointing out that watches have been known to be wrong.  Watches, he stresses to the jury, are not infallible.  He proudly brings in people to testify that they have made mistakes at some point in their lives about the time of day because their watch’s battery had run down, or their watch had gotten wet or been damaged in some other way.  As for the 5 o’clock news, the defense points out that scheduling errors in TV programming have been known to cause shows to come on more than five minutes after or five minutes before their scheduled time, and he even finds a TV producer to testify about a time that this did, in fact, happen.  And as for the quarry, the defense was able to find several expert witnesses (people who blow whistles at quarries) to testify that there had been times when they blew the whistle more than five minutes before or after the scheduled time, at each of their respective quarries.

As is obvious in this example, it is not enough for the defense to show that the methods used by each of the prosecution’s witnesses are fallible ways of telling the time.  Not only is it extremely unlikely that something was wrong with everyone’s measurement of the same event, but even if everyone were in error, we would hardly expect them to be in agreement with each other—especially when the measurements involve unrelated technologies and methods.  Creationist criticisms of the evidence that supports evolutionary theory or modern geology (e.g., dating rock samples) are often analogous to this defense attorney’s criticism of the evidence supporting the prosecution’s case.  To take the defense’s (and creationists’) criticisms seriously would literally mean that we could not tell time at all simply because watches are individually imperfect.  But in science that fallibility has been specifically taken into account through the use of multiple samples and multiple independent measurement methods.  Just like our imaginary prosecutor, scientists look for independent corroboration before they consider a theory to be well supported.
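
The intuition behind the analogy can be put in rough numbers.  The following is a toy simulation (the 5% error rate, the number of witnesses, and the error sizes are invented for illustration) showing both why the readings cluster around the truth and why “all the witnesses were wrong” is not a live option:

```python
import random

random.seed(42)

def read_clock(true_minute, error_rate=0.05):
    """Report the time; occasionally (hypothetically 5% of the time) be
    wrong by a random 10-60 minutes in either direction."""
    if random.random() < error_rate:
        return true_minute + random.choice([-1, 1]) * random.randint(10, 60)
    return true_minute

true_minute = 300  # 5:00 PM, counted in minutes after noon
readings = [read_clock(true_minute) for _ in range(63)]  # 20 + 17 + 26 witnesses

# Even with individually fallible clocks, the readings cluster tightly.
in_window = sum(295 <= r <= 305 for r in readings)
print(f"{in_window} of {len(readings)} readings fall in the 4:55-5:05 window")

# The chance that ALL 63 independent clocks are simultaneously wrong:
print(f"P(all wrong) = {0.05**63:.2e}")
```

The second printed number, roughly 10⁻⁸², is the point of the analogy: independent fallibility does not add up to collective unreliability.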

In the case of rock dating, and in all significant aspects of evolutionary theory, we have just this type of wide agreement spanning many different methods and many different samples (far more samples and methods, in fact, than described in the courtroom analogy).  This first article will focus on just five of the many independent, corroborating lines of evidence that confirm evolutionary theory, concentrating on one of its key hypotheses: Descent with Modification.  The definition of this hypothesis (taken from the subject refresher article on Evolution in Freethought Debater) follows:

The descent with modification component of evolutionary theory is that all life forms can trace their lineages back to earlier classes of life forms in a branching, “nested” hierarchy (forming what looks like a bush or tree, as in the “Tree of Life”), which can ultimately be traced back to the beginning of life on earth…During the long period since that time, changes in body plans have accumulated in diverse directions to make all the differences we now see between all life forms.  To continue the tree analogy, you can start from the tip of any arbitrarily chosen twig and follow it back to the point where it joins another twig, the point of origin of the two (or more) twigs. That now thicker twig or branch can then be traced back to where it joins another branch.  This process can be repeated until you ultimately reach the trunk.  Applied to evolution, each twig and branch represents a particular lineage; the points where there is a joining of those twigs and branches represents the common ancestor of all the lineages that can be traced back to that point.

The importance of recognizing the cross checking nature of science cannot be overstated.  When one forgets that this is the basis of good theory, one can quickly lose sight of the “forest for the trees,” and literally end up haggling over irrelevant individual cases in which a result was unexpected or inconsistent with the theory.  However, some exceptions are, in fact, expected so long as they can be shown to be “outlier” results (random or statistically rare errors) that do not significantly affect the conclusion drawn from all of the data taken as a whole.  This is exactly why statistics plays such a large role in analyzing scientific data, and why scientific results are usually stated along with a statistical level of certainty.

It is perhaps understandable why creationists make this sort of irrelevant criticism of outlier results.  Unlike actual science, creationism does “not seek organizational relationships or look for relationships in terms of universal physical laws.”1  Instead, creationists exist, by definition, to defend a point of view—a view rooted in absolute certainty in an unchanging Truth (with a capital T).

The lines of evidence we will briefly look at are the fossil record in the geologic column; the classification of living forms based on comparative anatomy; biochemistry; embryology; and finally, biogeography.  As you review the evidence notice how the independently corroborating nature of the different lines of evidence precludes any appeal to “there was this one fossil they couldn’t explain” or “someone once made a mistake in classifying an animal,” or even, “there was once this hoax. . .”  As in the court case example, the proof is in the wide statistical agreement of many measurements spanning all of these methods, not in any single data point.

Fossil Record

The geologic column, which is the identification and classification of the different rock layers (strata), was essentially completed by 1815, almost 50 years before Darwin’s theory.  Importantly, this work was done by the creationists of the time.  They noticed that each rock stratum contained a distinct collection of animals and plants.  So unique and consistent were these fossil collections in each of the layers, that certain fossils in them could then be matched to fossils on other continents to locate that layer within the geologic column (index fossils).  It is important to notice that this all predates Darwin and is completely independent of any assumptions about evolution.  Inexplicably, a common creationist complaint is that the geologic column presupposes the truth of evolution, which in turn presupposes the truth of the geologic column, resulting in a circular argument for evolution.  But as this very short history should already make clear, the geologic column makes no assumptions about evolution since it was established before there even was a theory of evolution.

By comparing the fossil contents from the lower layers (or strata) to the higher ones, these pre-Darwin creationists (such as Cuvier, the father of paleontology and a deeply religious creationist) could see a pattern in which an individual life form would appear in one layer and then be replaced in the next layer up by multiple variations on that original, single form.  These layers were often themselves followed by layers in which all or most of those variations (and variations of variations) suddenly and completely disappeared.  These creationists also noticed that the deeper the fossil, the less recognizable it was.  As one moved higher, the forms became increasingly recognizable.  As these scientists examined more and more strata, this cycle of emergence followed by branching variations continued.

Again, this pattern was recognized before Darwin’s theory, though the pattern was assumed not to reflect any kind of relationship through descent.   Based on their observations, these creationists (and basically all geologists were creationists at this time) concluded that God must have inflicted not one, but a series of cataclysms, each followed by a new creation.  However, as they collected ever more data, they realized that there had to have been ever more of these cataclysm and creation events.

To recap, at this point in pre-Darwinian history the geologic column had been established with the lower layers understood to be older than the higher layers.  The actual ages of the layers remained unknown, however, but were assumed to be consistent with a literal reading of the Bible.  Also, at this point in history, it was apparent that a slice through the earth—like cutting through a layer cake to reveal the layers—revealed not a hodgepodge mix of living and extinct forms (as one might expect from a world-wide deluge), but an extremely ordered and consistent pattern of fossils throughout the world.  That ordered and consistent pattern was that a form would appear in one layer, and in higher layers that same form would be replaced by similar though different forms, which became progressively more different in even higher strata.  This typically ended in the widespread disappearance of many of the forms, and then this cycle of appearance followed by a kind of “radiation” would start over.  One thing that was also apparent even then was that the higher the layer in which a fossil was found, the more recognizable the fossil usually was, while at lower layers the forms were less recognizable, and harder to tell apart and categorize.  For example, it is easy to tell mammals from reptiles today, but if one goes deep enough, mammals and reptiles become essentially indistinguishable (e.g., you find groups like the therapsids, mammal-like reptiles that blended features of both groups).  Creationists, like Cuvier, argued that the data could be explained by a series of divine cataclysms and creations, the last of which was the Biblical Flood.

Classification of Living Forms

Setting aside fossils altogether for a moment, but keeping the observed pattern we saw in mind, let us separately look at what happens when we classify today’s living forms based on their physical forms and structures (called “morphology”).  A fair question to ask before classifying living things is, “What characteristics should we use: size, weight, what?”  Well, the pre-Darwinian Linnaeus, who came up with the animal naming system we still use today, grouped animals by overall large-scale anatomical similarity, though this left some room for arbitrary decisions about what should be considered “similar.”  Later, the anti-Darwinian Richard Owen argued that if a feature could be shown to be the same structure modified for different purposes—as revealed by comparative anatomy and embryological development—then the animals should be classified closer together.  The more such structures were shared, the closer would be the classification.  He called such structures “homologous,” while structures that looked superficially the same, but were based on completely different structures and embryological development, he called “analogous.”

This homology/analogy distinction works very well because by using it one can predict other things that the animals would also have in common.  This approach allows one to gain new insights.  If you had classified by size alone, say, you wouldn’t gain any additional insights, and would know little more than just the size of the animal.  It’s interesting to ask why this approach makes for such effective predictions.  For Owen, and the creationist establishment of the time, the answer was that homologous structures revealed part of God’s plan:  He used relatively few basic templates that He modified to create all the species.  Different species that shared homologous structures were based on the same template, but each had customized modifications to meet the functional needs of the individual “kinds.”

Owen’s approach, however, also produces observations that conflict with expectations based on God’s having created each kind directly.  If each kind were separately created, then there would be no restriction preventing God from mixing and matching useful structures.  If God designed a structure to serve a purpose, then all species could benefit from that originally perfect design.  There would be no constraint on God that says once He designs a useful structure, He can only give it to other species that happen to share with that first species a set of completely unrelated characteristics.  For example, there would not be a rule that says God can only give three middle ear bones to species that have milk-producing (mammary) glands.

An all-powerful Designer should have at least as much flexibility as human designers.  Things designed and manufactured by humans show no such restrictions.  Design ideas are shared across widely different “kinds” of human creations.  This is quickly apparent when you try to create a classification of modern computers or aircraft based on their shared components.  For example, Global Positioning System (GPS) technology can now be found on helicopters, biplane crop dusters, high performance fighter aircraft, and even rental cars and fishing boats.  If you had earlier created a hierarchical classification that had winged aircraft as the basis for one branch and wingless (e.g., helicopter) aircraft as the basis for another, you would now have to add GPS to both branches.  In other words, GPS technology would not be nested within just one branch, but would cut across the branches.  As a result, one cannot create a stable, hierarchical classification for human-created things in which sets of characteristics are nested one within the other.

On the other hand, if each species is related through common descent, and not individually created, then we would expect that a classification of them based on shared structures would reveal just such a nested hierarchy of structures.  We would expect this since the structural inheritance of each branch is different and any new structures have to be built from the materials at hand; that is, they have to derive from these inherited “components.”  As a result, any new structures would be confined to the one lineage in which it first appeared, and to its descendant branches.  Consequently, we wouldn’t expect to see structures cutting across the branches, as we did in the GPS example.

When we classify living life forms we see the nested hierarchy predicted by Descent with Modification.  For example, consider the following classification (cladogram)2 of a few representative, but disparate forms:

Note the pattern in the sharing of characteristics.  They are not mixed and matched, but are “nested” one within the other.  For example, within the whole group all forms share a chambered heart; nested within those having a chambered heart is a group that additionally has a vertebral column; within the group that has both a chambered heart and vertebral column are nested those that additionally have mammary glands (i.e., you only get mammary glands if you have a chambered heart and vertebral column).  The characteristics appearing at a branch point are confined to all the branches above it; they never cut across to other branches.  This nested pattern is very characteristic of all life on earth.
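
The nesting property just described can be stated precisely: any two trait groups must be either disjoint or fully contained one within the other.  Here is a minimal sketch, using a simplified, illustrative assignment of the article’s example traits to its five example species (this is not a complete taxonomy; the lamprey is treated as the early, jawless branch, as in the cladogram):

```python
# Simplified, illustrative trait-to-species assignments.
traits = {
    "chambered heart":  {"lamprey", "perch", "lizard", "mouse", "cat"},
    "vertebral column": {"perch", "lizard", "mouse", "cat"},
    "amniotic egg":     {"lizard", "mouse", "cat"},
    "mammary glands":   {"mouse", "cat"},
}

def is_nested(groups):
    """True if every pair of trait groups is disjoint or one contains the other."""
    sets = list(groups.values())
    return all(
        a <= b or b <= a or a.isdisjoint(b)
        for i, a in enumerate(sets) for b in sets[i + 1:]
    )

print(is_nested(traits))                       # the pattern life shows: nested

traits["gps-like trait"] = {"lamprey", "cat"}  # a cross-cutting trait...
print(is_nested(traits))                       # ...breaks the nesting
```

The first check succeeds; adding a GPS-style trait that cuts across branches makes it fail, which is exactly the difference between living forms and human-designed artifacts.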

Descent with Modification also makes some additional, more specific predictions about the morphology of living forms.  First, since species can only work within their structural inheritance to solve adaptive problems, we would expect that as descendant species in one lineage branch out into different niches (some that involve flying and others swimming, for example), that their adaptations will involve different modifications of the same underlying structures.  These are the homologies we discussed earlier, and they are a specific prediction of the Descent with Modification hypothesis.  A corollary prediction of this view is that we would never actually find the “winged horse” seen in mythology, since that would involve entirely new appendages appearing “out of thin air,” as it were, rather than coming from existing structures.  This is why, for example, flying birds and mammals had to modify two of their four inherited appendages to get wings; that is, they had to modify their “arms” to make wings.

Second, where species from different lineages find themselves filling the same environmental niche (for example, living in different parts of the world and filling the bamboo-eating niche) we would expect to see cases of different structures being used for similar functions (as opposed to similar structures being used for different functions in the homology example); that is, we would expect to see different lineages solve the same adaptive problems with different structures—structures which reflect each lineage’s unique inheritance.  Such cases are examples of “analogies,” (or “convergent evolution” as it’s often called today) and are also predicted by Descent with Modification.  But a further interesting prediction of this theory is that some analogous structures will be inferior to others in fulfilling similar functions.  This is expected under this theory, since the inherited components of some lineages will make better designs easier to “get to” than the components inherited in other lineages.

In the case of homologies, we see exactly what is predicted.  For example, the forelimbs of all mammals are composed of the same bones arranged in similar ways.  It’s only the proportions of these bones that differ.  For example, compare the bones of bat, human, and dolphin.  While each fills a very different niche, and uses these bones for very different functions, they all have the scapula, humerus (upper arm), radius (forearm bone 1), ulna (forearm bone 2), carpals (wrist), metacarpals (hand), and phalanges (fingers).  Again, only their proportions differ.  Creationists often say that there are similar structures for similar functions, but if that’s true, then why does the bat’s wing have far more structural similarity with the human hand than it does with a bird’s wing?

In the case of analogies we see again exactly what is predicted.  For example, the shark (a fish), ichthyosaur (an extinct swimming reptile), penguin (a bird), and dolphin (a mammal) all have forelimbs adapted for swimming.  Outwardly, these forelimbs look very, very similar; however, the internal structures are radically different: the dolphin’s fin has more in common with the bat’s wing and the human hand (including the same five finger bones) than it does with the shark’s purely cartilaginous fin (no bones at all).  This makes sense in the evolutionary context given that the genetic inheritance in the fish lineage didn’t have bone to work with when the sharks first appeared.  So, again, the creationist claim of “similar structures for similar functions” is falsified.  What we typically see is different structures for similar functions and similar structures for different functions.

The Australian marsupials are a particularly startling example of this kind of convergent evolution (different structures adapted to solve the same functional needs).  There are marsupial versions of mice, flying squirrels, moles, ground hogs, rabbits and wolves, just to name a few.  As different as they are from each other, they still have more in common with each other than with any placental mammal.  For example, the marsupial mouse has more in common with the marsupial wolf than it does with the placental mouse with which we are all familiar.  (This Australian example is discussed in more detail below.)

Owen’s supernatural “cost-control” explanation for these patterns is an ad hoc theory tacked on after the fact to explain data that contradicts the predictions of a theory based on an all-powerful Designer creating each “kind” separately.  Worse, his explanation requires one to contradict God’s omnipotence and omniscience.  People such as Owen apparently felt that God used relatively few templates for reasons of efficiency in the design process, in the same way that GM, for example, would not want to design each car from scratch due to its being an inefficient use of limited and expensive resources.  Of course, it is entirely unclear how one is to reconcile this explanation with the notion of an all-powerful, all-knowing, and perfect Creator, who presumably isn’t operating under such resource constraints.  Unless there was a second act of creation after the “Fall” (one in which inferior designs were introduced) then one can only conclude that many “kinds” were rendered imperfect from the very beginning (i.e., before the Fall), since many of the analogous structures we see in the different “kinds” are not equally effective in serving similar functions.

An all-powerful and perfect creator who created each kind directly would be expected to have originally used the optimal design when the same function was needed.  If any inefficient functionality that we see today was due to “degeneration” following the “Fall,” then this would appear as resulting from defects appearing in a common structure, not from the use of entirely different structures, which would reflect a choice that had been made from the very beginning.

Relationship to Fossil Record and Descent with Modification

Now, let’s go back to the fossil record.  If the nested hierarchy we see in living forms reveals the particular design approach of a Creator who made all “kinds” at the same time, then there should be no connection between the characteristics of life forms (like having three ear bones) and the depth of the rock layers in which these characteristics first appear.  In other words, if all species were created at the same time, then there should be no correlation between elapsed time and the appearance of characteristics that define each species.

On the other hand, if all life is related through the process of descent with modification, then there should be a very specific correlation between time and characteristics; which is to say, there should be a correlation between the depth of rock strata, and the first appearance of structural characteristics in the fossils of those layers.  How should they correlate?  Remember that a structural characteristic that is shared by more branches, such as jaws or a vertebral column, should first appear in layers that are deeper than those containing the first appearance of any nested characteristic.  For example, the earliest appearance of animals possessing three ear bones (ossicles) should appear higher in the rock layers (i.e., in younger rock formations) than the earliest appearance of animals with vertebral columns.

This pattern is exactly what is found in the fossil record.  Referring back to our earlier tree with the lamprey, perch, lizard, mouse, and cat (which was made independently of fossils) what we find in the fossil record is that the jawless fishes (agnaths) first appeared in the deepest layers, followed in higher layers by the first appearance of bony fishes, followed in yet higher layers by the first appearance of reptiles, which in still higher layers were followed by the first appearance of mammals.  The theory of Descent with Modification not only predicts this, but is alone in being able to explain it in a productive way (that is, based on a theory with testable predictions and not by invoking Divine, mystical, or magical causes and purposes, which can be made to fit any data—which, of course, means it can explain no data).

Note that I said “first” appearance.  Obviously jawless fishes are still with us, as are the reptiles, and even the much earlier forms of life like blue-green algae.  Too often one hears comments like, “…but if we evolved from them, then why are they still here?”  It should be clear from the discussion so far that such a view represents a serious misunderstanding of descent with modification and evolution.  A parent species does not have to go extinct before a child species can emerge.  To take just one simple example, a child species can emerge when the parent species is split into two by the disappearance of, say, a land bridge, creating two islands where there had been just one.  The two groups might now diverge into two species, though one of the species may show almost no change from the original population, while the other shows dramatic change.  This could be the result of very different environmental pressures on each of the two groups.

To recap so far, we see that a tree of life made from comparative anatomy of living forms, which has nothing to do with timing or fossils, independently correlates to a tree of life based only on the appearance of characteristics as you go from deeper to shallower rock strata.

Biochemistry

Now if Descent with Modification is the right explanation of these two independent phenomena (i.e., the pattern in the fossil record and the pattern in living organisms), then any newly discovered features should continue to corroborate these same patterns, while perhaps adding even more detail and filling in open questions.

Since Darwin’s time, the life and geological sciences have exploded, complete with the emergence of whole new sciences that Darwin could never have imagined, such as molecular genetics, plate tectonics, and geochronology (rock dating). Any good falsifiable theory should not only hold up under the scrutiny of these new sciences, but thrive and contribute to this progress of knowledge.  Looking at the new field of molecular genetics, what happens if we try to make yet another “tree of life” diagram, but based only on DNA, independent of anatomy and independent of the fossil record?  If we end up with a tree of life that is completely different from the diagrams we got based on fossils and comparative anatomy,  then Descent with Modification will have a big problem.

DNA

Briefly, DNA is a molecule on which is strung a code made up of just a four-letter alphabet.  For example, part of the whole string might read, “…GCCTTACGGA…,” where the whole string is actually about 3 billion letters long in human DNA.  When that code is read across the entire DNA, one ends up with a recipe for growing a life form.  This four-letter alphabet really represents the four chemical “bases,” whose initials are A, T, G, and C (adenine, thymine, guanine, and cytosine).

Until the DNA copies itself (a process called “replication”), this string of letters exists as one side of a two-sided zipper-like structure called a double helix.  The other side of the zipper is the mirror image of the first, where T is the mirror of A, and C is the mirror of G.  When a cell divides, this zipper unzips into two halves.  Each half then builds a new sister half, letter by letter:  Ts attract As, As attract Ts, etc.  When all is said and done, we end up with two complete “zippers,” each identical to the original single zipper before it split.
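
The pairing rule can be sketched in a few lines, applied to the text’s own example fragment:

```python
# Base-pairing rule for DNA: A pairs with T, G pairs with C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Build the 'mirror' half of the zipper, letter by letter."""
    return "".join(COMPLEMENT[base] for base in strand)

fragment = "GCCTTACGGA"   # the example string from the text
paired = complement(fragment)
print(paired)             # -> CGGAATGCCT

# Complementing the complement restores the original strand, which is
# why each unzipped half can rebuild a complete copy of the whole zipper.
assert complement(paired) == fragment
```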

Now, each group of three letters makes a “word” (e.g., CCA is a word, or UCU, etc.).  These three-letter words are called “codons.”  Most codons are instructions for building things: each corresponds to a particular building block of a protein molecule.  These building blocks are called amino acids.  A protein is essentially a string of amino acids in a particular sequence all hooked up end-to-end.  Proteins in living things are made up of very long sequences of just 20 different types of amino acids.  A few codons instead act as punctuation marks (“stop reading” and “start reading” instructions).  Genes themselves are also divided into coding segments, or exons, and non-coding segments, or introns; the introns are cut out before the protein-building instructions are read.  The DNA code for making the whole protein is an example of a “gene,” each of which can be thousands of letters long.

A genetic mutation amounts to A, U, G, or C (actually the U stands for uracil, which is used instead of T in RNA, where RNA is an intermediate step before the protein-building process) being accidentally “flipped” to something else, or having one or more letters deleted, added, reversed, or put somewhere else in the sequence.  For example, if the word UCA becomes UCG, then there is no effect since they both code for the same amino acid: serine.  However, if the first position is flipped, turning UCA into GCA, then the amino acid in this part of the completed protein will be alanine instead of serine.  This change to the protein may or may not affect the way it functions or the final “fitness” of the completely grown life form.
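
The UCA/UCG/GCA examples above can be checked mechanically.  Here is a minimal sketch, hard-coding only the three codons mentioned in the text rather than the full 64-codon table:

```python
# A tiny slice of the standard genetic code: just the codons used in the
# text's examples (UCA and UCG both code for serine; GCA codes for alanine).
codon_table = {
    "UCA": "serine",
    "UCG": "serine",    # third-position change from UCA: no effect
    "GCA": "alanine",   # first-position change from UCA: different amino acid
}

def classify_mutation(before, after):
    """Label a single-codon change as silent or amino-acid-changing."""
    a, b = codon_table[before], codon_table[after]
    return "silent" if a == b else f"missense ({a} -> {b})"

print(classify_mutation("UCA", "UCG"))  # silent
print(classify_mutation("UCA", "GCA"))  # missense (serine -> alanine)
```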

Some mutations may prevent the protein from even forming at all, such as a mutation affecting one of the “punctuation marks.”  If this happens, the entire gene of some thousands of letters is rendered useless, or “dead.”  (Of course, from the standpoint of affecting offspring, and therefore evolution, the only mutations that matter are those that affect the DNA copies that go into the sex cells, since only they are going to be used to grow another organism.)

In the interest of space, I’ve tried to limit this introduction to genetics to only those points bearing directly upon our discussion.  Hopefully I’ve described enough of it to make the significance of the next point clear: only around 5% of DNA actually codes for building anything; the rest is introns along with a vast number of dead or “pseudo” genes.  This means that most random mutations will have no impact because they will land in this vast sea of ignored dead genes.  (All introns, including these dead genes, are actually cut out to leave only the exons just before the instructions are read prior to building proteins.)

Keeping in mind that some 95% of the genetic code is nonfunctional, and also keeping in mind that random mutations appear on a fairly regular basis, we can expect that mutations affecting these dead areas of the DNA will tend to spread throughout the breeding population.  The reason they will tend to spread is that specific mutations are very rare, and they have no effect on survivability (since the dead genes don’t do anything).  Consequently, as the creature possessing this mutation reproduces, it will likely pass it on and it will multiply over the generations.  If one could find very old DNA of a particular species, and compare it to new DNA from that same species, one would expect the newer DNA to show an accumulation of these types of mutations.  Even more importantly, if that earlier population had split into two, say by a land bridge being cut creating two islands from one, then each of the two descendant populations should be accumulating mutations independently of each other—getting more and more different from each other over time—since mutations in one group cannot spread to the other group.  The bigger the accumulated difference, the longer the time they have been separated.
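
This accumulation-and-divergence argument is easy to simulate.  Below is a toy neutral-mutation model (the sequence length, mutation counts, and the assumption of roughly one fixed mutation per generation are all invented for illustration): two populations inherit the same ancestral sequence, accumulate random substitutions independently, and grow steadily more different from each other the longer they are separated.

```python
import random

random.seed(0)
BASES = "ACGT"

def mutate(seq, n_mutations):
    """Apply n random single-letter substitutions (neutral: no selection)."""
    seq = list(seq)
    for _ in range(n_mutations):
        i = random.randrange(len(seq))
        seq[i] = random.choice(BASES.replace(seq[i], ""))  # pick a different base
    return "".join(seq)

def differences(a, b):
    """Count positions where two equal-length sequences disagree."""
    return sum(x != y for x, y in zip(a, b))

ancestor = "".join(random.choice(BASES) for _ in range(10_000))

# Two isolated populations accumulate mutations independently of each other.
divergence = []
for generations in (100, 1_000, 5_000):
    pop1 = mutate(ancestor, generations)
    pop2 = mutate(ancestor, generations)
    d = differences(pop1, pop2)
    divergence.append(d)
    print(generations, d)
```

The printed divergence grows with the number of generations since the split, which is the logic behind using accumulated neutral differences as a rough clock.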

Earlier we created a tree based on comparative anatomy, and saw that it corresponded closely to a tree independently built from the fossil record.  In that example we looked at representatives from disparate branches, but similar anatomical and fossil classifications restricted to just the primates invariably show that the great apes are very close to each other and to humans, with monkeys more distantly related, and lemurs further still (i.e., “more distantly related” means representing an earlier split, in the same way that the perch, in our earlier diagram, represents a lineage that split off prior to the lizard’s lineage).

With our understanding of genetic mutation and of the large amount of non-functional DNA, we can make yet another “tree of life.”  One way to build a tree based on genetic differences is to “unzip” the double-stranded DNA of different species, take one strand from one species, and see how well it bonds to the complementary strand of the other species.  Naturally the bonding will be 100% for the same species.  The greater the genetic similarity, the tighter the bond (measured by the heat required to re-separate the strands).  Here are the results from this DNA-DNA binding technique 3:

 

Species          Percent DNA Binding
Human                    100
Chimpanzee               100
Gibbon                    94
Rhesus monkey             88
Tarsier                   65
Lemur                     47
Mouse                     21
Chicken                   10

This certainly isn’t expected if each kind were separately created, especially when you consider that the vast majority of the sequences have nothing to do with function.  The dead genes, however, provide much more powerful corroboration of evolution than does this impressive result.

Shared Typographical Errors

The discovery of the high proportion of dead or “pseudo” genes provides corroboration of the Descent with Modification hypothesis in a startlingly different way.  To see how, let’s start with an analogy.

Imagine you are a teacher grading essays.  The essays are each a response to the same question you posed to the whole class.  You, of course, made clear to the students that they are to work separately, and that they are not to copy each other’s work.  So how do you tell whether matching passages in some of the returned essays were plagiarized or were just the result of a coincidentally similar choice of words in response to the same question?  What if 30 consecutive words were identical?  Well, that certainly is not likely, but what if in addition all the punctuation marks also matched exactly?  Well, now things are looking even worse for student honesty; but what if in addition to all of that, grammatical, spelling, and punctuation errors match exactly between the two passages and in exactly the same locations?  Well now the odds of a coincidence are so small as to be considered zero.  An extension of this kind of analysis can also reveal whether one person copied from another, or if two students each separately copied from the same third source, and whether each of those “child” copies were themselves copied by others, etc.  The “junk” genes, of which so much of DNA is made, allow just this type of analysis on gene “copying” between species.
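The intuition that matching errors are damning can be made roughly numerical.  Here is a back-of-the-envelope sketch, with every number invented purely for illustration:

```python
from math import comb

essay_length = 5000  # character positions where a typo could land (hypothetical)
num_errors = 10      # typos each student happens to make (hypothetical)

# If student B's typo positions are independent of student A's, the chance
# that B's errors fall on exactly the same set of positions as A's is
# one over the number of ways to choose 10 positions out of 5000.
p_same_positions = 1 / comb(essay_length, num_errors)

print(p_same_positions)  # smaller than 1e-30
```

And this counts only matching *locations*; requiring the errors to also be the same errors (the same wrong letters) shrinks the probability further still.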

As mentioned earlier, mutations can prevent a gene from making anything at all.  A gene is just a long run of code (often made up of thousands of the code letters A, T, C, and G) that, when read, produces, for example, an entire protein.  If mutations occur at critical locations, then that protein may not be made at all.  Such a mutation would render the whole string of code, of some thousands of letters, “dead.”  Since only a few errors in a string of thousands can make a gene non-functional, an analysis of the string can often reveal what the gene originally coded for.

For example, humans and the other primates require vitamin C in their diets.  In these species, vitamin C (ascorbic acid) deprivation will lead to debilitating diseases such as scurvy.  “So what?” you ask?  Well, this dependence on dietary vitamin C is the exception among mammals, since, aside from the primates and the guinea pig, mammalian species generally produce an enzyme protein that allows their bodies to synthesize their own vitamin C.  This enzyme protein is called LGGLO (which, if you must know, stands for L-gulono-gamma-lactone-oxidase).  Now, the LGGLO gene that codes for this protein has been identified.

Under the evolutionary hypothesis, all mammals inherited the LGGLO gene from a common ancestor.  Any mutation that would render this gene nonfunctional in any of this common ancestor’s descendant species would be pretty rare, and when it did occur it would be fatal unless the gene were no longer needed, such as if the diets of those affected species just happened to be rich in vitamin C.  Now what if one of these descendant species develops a mutation in the LGGLO gene that “kills” the gene—that is, makes it a dead gene?  Well, as I mentioned above, this would only be a problem if the diet of this species is not rich in vitamin C.  But what if this species with a dead LGGLO does have a diet rich in vitamin C, and it branches into multiple descendant species of its own over the course of evolution?  Well, now we have a testable prediction.  If Descent with Modification is true, we would expect that this same dead gene would appear in each of those descendant species.  In the case of the primates, those descendant species are alive today, and include us.  The prediction is confirmed: that broken relic of a once working LGGLO gene has indeed been found in humans and in the other primates. 4

Now humans and the other primates are believed to share a recent common ancestor based on an enormous array of other converging lines of evidence that have nothing to do with the LGGLO gene—such as morphological, other genetic, molecular, and fossil lines of evidence.  So the additional fact that humans and primates as a group need vitamin C in their diets, and the fact that this condition is extremely rare among the mammals, suggests yet another very specific testable prediction—namely, that the particular genetic “typo” that makes this LGGLO gene “dead” would be the exact same typo in humans as it is in the other primates.  The reason we would expect this is that any number of potential defects can “kill” a gene, so if the defect occurred independently in each primate, then it would be extremely unlikely to be the result of the same typo each time—and it would certainly be completely mysterious why such a cluster of independent events would target the primates as a group when it is extremely rare among all mammals.  This “identical typo” prediction for the primates/human relationship has now been confirmed:  “A small section of the GLO pseudogene sequence was recently compared from human, chimpanzee, macaque and orangutan; all four pseudogenes were found to share a common crippling single nucleotide deletion that would cause the remainder of the protein to be translated in the wrong triplet reading frame.” 5
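The quoted “wrong triplet reading frame” effect is easy to see concretely.  A minimal sketch, using a made-up sequence rather than the actual GLO pseudogene:

```python
def codons(seq):
    """Split a DNA string into consecutive triplets, dropping leftover bases."""
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

original = "ATGGCATTCGGTAAACCGTAG"      # invented 21-base stretch of code
mutated = original[:9] + original[10:]  # delete the single base at index 9

before, after = codons(original), codons(mutated)

print(before)  # ['ATG', 'GCA', 'TTC', 'GGT', 'AAA', 'CCG', 'TAG']
print(after)   # ['ATG', 'GCA', 'TTC', 'GTA', 'AAC', 'CGT']
```

Every codon upstream of the deletion is untouched, while every codon downstream is read in a shifted frame; this is why a single shared deletion scrambles the remainder of the protein identically in every species that inherits it.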

It is important to see that the strength of this prediction in no way depends on primates being the only group of species with a dead LGGLO gene.  The strength of the prediction comes from two powerful facts: first, that the mutation is extremely rare among mammals but found in all primates as a group (why would this be if the primates are not related?); and second, that for primates as a group the gene is not just dead, but dead from precisely the same genetic typographical error.  Now it shouldn’t be very surprising if we find a dead LGGLO gene in some other mammalian species that does not share a common ancestor with the primates, so long as it is a rare find.  This is so because it is certainly conceivable for a mutation that kills the LGGLO gene to appear independently in another lineage; however, the strong expectation is that if it did occur independently, then the genetic typo involved would be a different typo from the one found in the primates. This is the expectation, for example, in the case of the guinea pig, which is an example of another rare occurrence of a mammalian species with a dead LGGLO gene.

So the LGGLO case is yet another powerful independent line of evidence that converges with so many others.  The creationist has to explain not just how each of these lines of evidence might be wrong, but how, if they are wrong, they all point to the same answer.  Convergence makes appeals to potential errors extremely implausible since it relies on either a fantastic coincidence (that all these errors are just coincidentally consistent with a single theory), or a fantastic conspiracy (that all international universities have been secretly colluding to hide the truth for over a century).  If the LGGLO case is an example of degeneration after the Fall, then why did it just happen to primates and humans as a group in such a way as to independently corroborate that identical grouping constructed from all the other independent sources already mentioned (fossil layering, morphology, etc.) —was it to “test our faith”?

A functional gene that is rendered useless and then persists as a dead gene in the population is rare, because most such errors are fatal or horribly debilitating.  (In the case of LGGLO, the primate diet was rich in vitamin C, so the gene’s loss was not a disadvantage.)  A more common error is gene duplication, which occurs during the DNA replication process.  It is as if, in copying a book, someone copies the same paragraph or page twice.  Since one gene is functional, and the copy usually defective and nonfunctional (i.e., dead), mutations can accumulate in the defective copy without affecting the animal carrying it, since it still has a working copy.  Consequently, more of this type of dead gene is found than of the other type.

One example (of many) of duplicated dead genes is the gene that codes for an enzyme involved in the metabolism of steroid hormones.  Right next to this gene in humans is a defective copy of the same gene; that is, it’s a non-functional, “dead” copy of the still functional gene right next to it.  Many mutations can render a copy dead; in this case, the particular typo that ruins this gene copy is the deletion of a particular string of just 8 letters (out of a much, much larger number of letters).  Now, chimpanzees have the same dead gene, and here is the “smoking gun”: chimpanzees have the exact same 8-letter deletion.6  This is important: they don’t just have the same dead gene, they have the same typographical error that we do in a very large “book” of letters.

Any appeal to “similar design for similar function” is irrelevant here since these are non-functional errors.  Any appeal to “degeneration since the Fall” is also irrelevant, unless one is to assume that God intervenes directly to cause this degeneration by first creating redundant copies of a gene and then targeting particular letters out of many thousands in the genetic code, and only in those animals that are grouped closely together based on comparative anatomy and fossil layering.  Keep in mind that this is only one of many examples of specific typos in the same locations of dead genes that are shared between humans and primates.

Of course, God can do anything (which is why saying “Because God did it that way” explains nothing), but if He did, it only serves to create false corroboration of evolution.  If we allow for that type of explanation, then we can just as easily accept that the universe was created ten minutes ago with our memories intact, and that all evidence to the contrary is either a test, or just “because God did it that way for His reasons, which we are too lowly to comprehend.”  Again, not only can anything be believed under such an approach, but it also denies us any insights into the workings of nature, since it undermines all of science.

What else did all those dead genes code for?

Those dead and dormant genes code for some strange things, indeed—but nothing surprising from an evolutionary framework.  For example, embryonic tissue from the jaw of a chicken was induced to grow teeth.7  Please pause and consider the significance of this: genes for teeth are in the chicken’s DNA, but ignored because the chemical signal that activates them no longer occurs.  Of course, this experiment proves that the genes for teeth are in there.  Why a creator would give chickens the genes for teeth, but keep them turned off, is something difficult to imagine without the help of a qualified creationist.

To recap up to this point, we’ve seen that a conclusion of descent with modification is corroborated by a tree of life based only on comparative anatomy; and again independently corroborated by a tree based only on the position of extinct species in the fossil layers; and again independently corroborated by a tree based only on DNA, which is made up largely of code that doesn’t do anything; and again independently corroborated by the pattern of shared identical typographical errors in the dead genes found in DNA; and again independently corroborated by the kinds of things those dead genes used to code for—things like teeth in chickens, which fits with where chickens are in a tree of life with respect to their toothed, reptilian ancestors.

Embryology

Earlier I described DNA as a kind of recipe.  This analogy is apt because the form of the final organism depends on the precise timing of the activity of various genes during embryological development.  Indeed, embryological structures are an important factor in identifying homologous structures (as they were for the earlier taxonomists, such as the anti-Darwinian Richard Owen, mentioned above).

A gene typically doesn’t create an anatomical feature all by itself; instead, the feature arises from the action of the gene working in concert with many other genes, each operating under a complex schedule of timing.  Modifications of genes that control timing can cause some features to be suppressed, others to be dramatically modified, and still others that are partially developed at one stage of embryological development to be completely erased at a later stage.  Because genes work in this manner, we would expect that the developing embryo will sometimes reveal certain aspects of its evolutionary past.

Examples of this include the whalebone whale and the anteater, both of which develop teeth in an early embryological stage only to reabsorb them in a later embryological stage before birth.8  (This example also demonstrates the presence of tooth genes in yet more toothless animals, just as with the chicken.)  Terrestrial salamanders at one stage develop both fins and gills, but then lose them before hatching.

Not only do examples like this show the presence of silent genes for characteristics alien to the definition of the species in which we find them, but these dormant traits are consistent with the placement of these animals in the tree of life as having descended from species that did express these traits: birds descended from toothed reptiles, amphibians from an ancestral fish, and so on.

Of course, this embryological process may not cause the complete disappearance of ancestral traits, as it did in the above examples.  When they don’t completely disappear, but are still nonfunctional, the structures are described as “vestigial.”  Flightless beetles, for example, have wings that remain forever sealed under permanently fused wing covers.  Even Darwin commented on many such examples, including that of the rudimentary hind legs found in boa constrictors.  Seeming to anticipate that some might claim these to have an as yet unknown function, Darwin asked, “why…have they not been retained by other snakes, which do not possess even a vestige of these same bones?”9

Biogeography

This final section is perhaps the most straightforward and certainly one of the most persuasive stand-alone bits of evidence that support Descent with Modification (in other words, even if it weren’t corroborated by all the independent lines of evidence we’ve discussed so far).  Based on Descent with Modification, if one species is the descendant of another, then there had to be some geographical continuity from where the parent species is found to where the child species is found—they had to be able to get there.

Of course, if this geographical continuity were broken at some point in the past, then there are predictable consequences—but only if Descent with Modification is true.  Without going into the many examples of biodiversity that support Descent with Modification, I will focus only on the Australia example, since it alone is such an overwhelmingly persuasive example—particularly against any notion that all of today’s air-breathing species came from one point on the globe, such as from an “Ark.”

Deeper layers of the fossil record show that marsupial mammals (pouched mammals like the kangaroo) were more common than placental mammals (mammals like us that gestate their young inside their bodies with the use of a placenta).  During this time (i.e., in these layers) some parts of the world were populated only by marsupial mammals, including the land mass that would eventually become Australia.  Shallower (more recent) layers of the fossil record show that placental mammals had displaced the marsupials over much of the earth.

But what if a barrier appeared before the expanding placentals could invade a particular area that had been occupied only by marsupials?  For example, what if a peninsula that had been occupied only by marsupials, became an island before the new placentals migrated there?  Well, Descent with Modification would predict that the isolated marsupials might not only survive, but fill many, if not all, of the same ecological niches that placental mammals occupy elsewhere in the world.  In other words, they would evolve many analogies to placentals, and only in one place: their isolated island.

Of course, this is exactly what we see in Australia.  In the table below10, keep in mind that all of the animals in the Marsupial column are more closely related to each other than they are to their counterparts in the other column.  This is an extremely telling observation; it really should make you say, “Wow!”

Consider that the Tiger cat is more closely related to the marsupial mouse than it is to the Bob Cat, which looks superficially almost the same.  The same can be said about the Tasmanian Wolf, which looks almost identical to a “regular” wolf, but is also in fact a closer relative to the Marsupial mouse, who for all the world looks like a “regular” mouse.

 

Placental           Marsupial
Wolf                Tasmanian Wolf
Flying Squirrel     Flying Phalanger
Mouse               Marsupial Mouse
Mole                Marsupial Mole
Anteater            Numbat
Bob Cat             Tasmanian Tiger Cat
Lemur               Spotted Cuscus

Keep in mind that all these marsupial species exist in only one part of the world.  Fascinating, to be sure; and this is not only explained by Descent with Modification, it is practically expected.  Moreover, it adds yet another independent cross-check of the tree you get based only on the comparative anatomy of marsupials and placentals, which, in turn, is independently cross-checked by the tree drawn only from the layer positions of fossils, which is cross-checked by the tree based on biochemistry, etc., etc.

On the other hand, this is not only completely inexplicable under the creationist “model,” but it actually falsifies that “model.”  What can the creationist say about such a pattern in biogeography?  All they can say is that God created parallel versions of each of these animals (which alone contradicts “similar structures for similar functions”), that they left the Ark at the same time from Mt. Ararat, and that somehow the marsupial mouse, Tasmanian wolf, Tiger Cat, and the many, many other marsupial species (not shown in the table) that exist only in Australia all cooperated as a group to get to Australia ahead of all placental mammals.  As Philip Kitcher puts it,

Some marsupials—wombats, koalas, and marsupial moles, for example—move very slowly.  Koalas are sedentary animals, and it is difficult to coax them out of the eucalyptus trees on which they feed…The idea of any of these animals engaging in a hectic dash around the globe is patently absurd (On the evolutionary account, of course, they are all descendants of ancestral marsupials who had millions of years to reach their destinations).11

If they all started at the same time in the same place, as the creationists claim, what was it about their lack of a placenta that made them move as a group, predator and prey, large and small ahead of very fast placental predators to just this one part of the globe?  Without a direct Divine assist, it’s hard to imagine a coherent explanation.

Conclusion

Let’s think back to the original example of the court case that we discussed at the very beginning.  Can analog watches be wrong? Of course.  Can certain fossils be misidentified, or identified as coming from the wrong layers? Of course.  Can digital watches make mistakes? Definitely.  Are animals sometimes misclassified based on their anatomies? Definitely.  What about the timing of the 5 o’clock news—is it infallible? Definitely not.  What about the reading of DNA sequences—is it infallible? Definitely not.

But just as in the court case, such criticisms miss the whole point.  In the court case example, we don’t believe the suspect is guilty just because of what someone’s watch said, or just because someone heard the 5 o’clock news coming on at 5 o’clock, or just because someone heard the 5 o’clock whistle blow.  We believe it because what someone’s watch said agreed with what the timing of the 5 o’clock news was telling us, which agreed with what the timing of the 5 o’clock whistle was telling us.  In other words, we believe because of the agreement between multiple independent sources (not to mention the agreement between multiple samples from the same source—e.g., many watches of different types).

When you find yourself talking about highly technical minutiae regarding some particular measurement or method, remember that no one particular measurement or method is why scientists believe that evolution is true.  The critic of evolution has to show not how a measurement or method may be wrong (we all know that), but how all of the thousands of different measurements using many different independent methods come up with the same wrong answer.  In the court case, ask yourself what the odds are that all those analog and digital watches were broken in different ways, but still all said 5 o’clock at the same time; and further that this matched the mis-scheduling of the 5 o’clock news, which, in turn, coincided with the mis-scheduling of the 5 o’clock whistle.  This conspiracy of errors would have to ensure that as a group they all agreed it was 5:00 PM when it was really, say, 2:02 PM.

It is in fact likely that errors will be made, precisely for the reasons creationists give: these techniques are not perfect.  However, if the prosecution is to have a convincing case, errors should appear as a couple of watches that said it was 4:35 or 5:20, with one perhaps saying it was 11:00 AM, but with the overwhelming majority of independent measurements and methods showing tight agreement around 5 o’clock, plus or minus a minute or two.  Naturally that would be extremely convincing, and the errors would be recognized as statistical outliers—due precisely to the known fallibility of individual measurements.  This is precisely why science doesn’t consider any theory strong on a few data points, but only when there are many data points and a good deal of independent corroboration.  Keep in mind that errors and unexpected results are reported with the rest of the data.  This is how science accounts for the fallibility of measurements, and the imperfections of individual scientists.
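The watch analogy can be put in miniature.  With made-up witness readings, a robust summary such as the median lands on the consensus time, and the broken watches stand out as outliers rather than shifting the answer:

```python
import statistics

# Witness readings, in minutes past noon (5:00 PM = 300). Invented numbers:
# most cluster tightly around 300; a few instruments are badly off.
readings = [299, 300, 300, 301, 300, 299, 302, 300, 275, 320, 300, 301, 660]

consensus = statistics.median(readings)
outliers = [r for r in readings if abs(r - consensus) > 10]

print(consensus)  # 300 (5:00 PM), despite the broken watches
print(outliers)   # [275, 320, 660] flagged as outliers, not averaged in
```

This is the sense in which fallible individual measurements can still yield a very strong collective conclusion.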

So it is with evolution.  To the creationist one has to ask: How did all the possible errors that could happen in each separate case not only happen, but conspire together so that as a group they would have tight agreement around the same wrong answer?  That is what we mean by independent corroboration; that is what we mean when we say that a theory is well supported by the evidence; and that is what the critic needs to explain.  Indeed, his or her alternative theory must not only explain the same phenomena, but must account for that agreement, and not simply point to the obvious fact that mistakes can be made, or that some questions remain, as they do in every field of science.


1 Arthur N. Strahler, Science and Earth History: The Evolution / Creation Controversy (Buffalo: Prometheus Books, 1987), p. 108.

2 Adapted from Joel Cracraft, “Systematics, Comparative Biology, and the Case Against Creationism,” in Scientists Confront Creationism, ed. Laurie R. Godfrey (New York: W. W. Norton & Company, 1983), p. 171.

3 Adapted from Strahler, p. 352.

4 Edward E. Max, “Plagiarized Errors and Molecular Genetics,” Creation/Evolution XIX (1986), p. 34.  Reprinted and updated 7/12/99 in TalkOrigins.

5 Ibid.

6 Kawaguchi, American Journal of Human Genetics 50:766-80 (1992), cited in Max.

7 E. J. Kollar and C. Fisher, Science 207:993 (1980), cited in Douglas J. Futuyma, Science on Trial: The Case for Evolution (Sunderland: Sinauer Associates, 1995), p. 48.

8 Stephen Jay Gould, “Evolution as Fact and Theory,” in Science and Creationism, ed. Ashley Montagu (New York: Oxford University Press, 1984), p. 122; see also Futuyma, p. 189.

9 Quoted in Futuyma, p. 49.

10 Adapted from Tim M. Berra, Evolution and the Myth of Creationism (Stanford: Stanford University Press, 1999), Fig. 16.

11 Philip Kitcher, Abusing Science: The Case Against Creationism (Cambridge: The MIT Press, 1993), p. 141.

An Introduction to the Evolution versus Creation Debate

The evolution / creation debate hinges largely on a disagreement regarding the nature of science and scientific theories. Before getting into that, however, it will be important to address the common misconception that evolution and atheism are somehow two sides of the same coin. After that we will define the critical terms in this debate and summarize the basics of evolutionary theory.

The Relationship of Atheism to Evolutionary Theory.

In short, there isn’t one. Science generally, and evolutionary theory specifically, do not disprove the supernatural God of Judeo-Christianity. Evolution does, of course, undermine some of the important arguments that have been put forward for the existence of God, such as Paley’s famous argument from design (best known for its use of an analogy between the complexity of living things and the complexity of a watch), but it certainly doesn’t undermine all arguments for the existence of God, and it never will, for the simple reason that science doesn’t address the existence of supernatural entities one way or the other.

Evolution is no more atheistic than is medicine. Practitioners in both fields exclude supernatural interventions from their explanations of the phenomena they investigate. For example, you wouldn’t expect your doctor to say, “We don’t need to research your disease because we believe it’s the result of a curse from God, so your only treatment is repentance.” Just because medicine excludes supernatural explanations as a matter of method, it does not follow that medicine is therefore committed to atheism. Medical doctors are not being inconsistent when they both believe in God and practice medicine under the working assumption that God has not jumped in to manipulate natural laws in order to create a disease or other medical phenomenon. Similarly, evolutionary science also excludes supernatural explanations as a matter of method, but again, this is not equivalent to saying that evolutionists are committed to atheism. What medicine and evolution (and all the sciences) are saying is that direct intervention by God, or other supernatural beings, is assumed to be unnecessary in explaining the phenomena they investigate.

If such supernatural explanations were allowed into the methodology of science, important problems would never be pursued. For example, scientists might simply have said that polio was God’s punishment for original sin; as a result, research into its supposed ‘natural’ causes would be not only unnecessary, but positively blasphemous. Regardless of whether or not such a belief is actually true, we have two choices: we can either walk away and try to pass laws banning such heretical research practices, or we can continue the research as if there were no spiritual or magical causes involved — as a methodology. History has shown that this naturalistic methodology is regularly rewarded with deeper insights into the workings of nature, insights that have moved the human race out of the Dark Ages. One may well believe that God is the ultimate originator of these laws, and even that He may intervene on occasion for His own purposes; but in order to advance any field of knowledge, one must proceed under the working assumption that God has not been and is not now involved in the areas under study. Under this approach, if we are wrong and the phenomenon being investigated is supernatural in origin, then at worst our research will turn out to be a waste of time; but if we are right, and a natural cause does exist, then our human knowledge expands — we go from believing that diseases are the result of some angry deity to understanding that they are a form of predation by microorganisms; this in turn allows us to go from cowering and chanting in the face of these threats to actually controlling them.

Clearly then, the scientific method does not commit one to the belief that God does not exist, or even that God does not intervene in supernatural ways. What the scientific method does is define a methodology that allows science to move forward in all those areas in which God does not intervene, and it is effective in this only because it does not assume in advance what those areas are. Rather than assuming that God has an explanatory role until proven otherwise, the scientific method turns this around and assumes that God has no explanatory role until it can be proven that He has. This shifting of the burden of proof, this single change in perspective, was essential to unlocking the door leading out of the Dark Ages.

Interestingly, this methodology itself amounts to a kind of experimental test of God’s explanatory role in nature. Rather than finding our scientific endeavors regularly frustrated because so many phenomena are caused by supernatural agents, we instead find that this, in fact, never happens. Even when phenomena cannot yet be explained, natural explanations can be identified that are at least as plausible as any supposed supernatural “explanation.” (This is why theists who regularly point to scientific unknowns as “proof” of God’s existence find themselves on ever shrinking ground. Such arguments are often called God of the Gaps arguments, where “gaps” refers to the current gaps in our scientific knowledge.) Science has yet to regret its naturalistic working assumption, and this fact does amount to a powerful inductive argument that God simply does not have an explanatory role in the workings of the universe. However, this does not prove that God does not exist.

Questions of whether or not God, or other supernatural being(s), exist are simply not within the scope of science, but of philosophy. Science’s commitment is to a naturalistic methodology, not a naturalistic ontology (i.e., a commitment that nature is all that exists). Many scientists do hold to this ontological view, but many do not, and they cannot be accused of being inconsistent as a result. Belief in God is not incompatible with a commitment to science’s naturalistic methodology. Having this point explicitly made is, I think, something the many theistic and Christian evolutionary scientists well deserve.1

A Brief Description of Modern Evolutionary Theory

Some common misconceptions of the theory of evolution are typified by such questions as “If man evolved from the apes, then why are there still apes?” Questions like this point to the importance of first ensuring that your opponent has at least a basic understanding of evolutionary theory. The theory of evolution is not, as is commonly assumed, equivalent to Darwinism. The theory of evolution is an interrelated set of now well-confirmed hypotheses including descent with modification (i.e., all life being related through common descent), along with natural selection, genetic drift, and genetics as the mechanisms behind the evolutionary process itself.

The descent with modification component of evolutionary theory asserts that all life forms can trace their lineages back to earlier classes of life forms in a branching, “nested” hierarchy (forming what looks like a bush or tree), which can ultimately be traced back to the beginning of life on earth (a point that would itself likely have been the culmination of a long period during which the distinction between living and non-living matter would have been difficult to make)2. During the long period since that time, changes in body plans have accumulated in diverse directions to make all the differences we now see between all life forms. To continue the tree analogy, you can start from the tip of any arbitrarily chosen twig and follow it back to the point where it joins another twig, the “common ancestor” of the two (or more) twigs. That now thicker twig can then be traced back to where it joins another thicker twig. This process can be repeated until you ultimately reach the thickest twig of all: the root of the bush. Applied to evolution, each twig represents a particular lineage; the points where those twigs join into a thicker one represent the common ancestor of all the lineages that can be traced back to that point. Note that this descent with modification hypothesis can be tested independently of any ideas about how it happened, or the mechanism behind it. The mechanism introduced by Darwin was that of natural selection (along with the additional hypothesis that natural selection proceeded in a gradual, steady fashion).

To understand natural selection, we start by recognizing that individuals within any given breeding population are not identical. They differ from one another in slight and not-so-slight ways. If any of these various characteristics even slightly increases the odds that the individuals possessing them will survive long enough to reproduce, then the number of those individuals possessing those traits will tend to increase after each generation. This is so simply because more parents with that trait are having children than are parents without that trait. Over time, those traits will then become the “norm” for the population. As new traits and enhancements to existing traits continue to emerge, they will be similarly “selected for,” and then also become the norm for the group. This means improvements will accumulate. Alternatively, traits that lessen the relative odds of an individual’s reproducing will tend to be “selected against.”

What does it mean to say a trait is “beneficial”? In the context of evolutionary theory it means nothing more than saying it increases the odds that the individual possessing the trait will successfully reproduce. Whether or not a particular trait will help or hinder that depends completely on the environment of that individual, where environment includes such factors as competition within and between species, predators and disease, available niches, and of course, the current traits already common in the population.

Therefore, some physical trait—height, say—might be beneficial in one environment, but harmful in another.

If a particular interbreeding population, say, a species of small rodent, splits into two because of continental separation or a change in the course of a large river, for example, then each of the groups will be subject to different selection pressures (since they are now in different environments). Since the two groups are no longer interbreeding, traits increasing in frequency in one group cannot be passed to the other group, and vice versa. Over a long period of time the cumulative effect of this will lead first to the appearance of different varieties (e.g., “races”), then to entirely different species. Note how the natural selection hypothesis mentions nothing about the possible mechanisms behind it, such as genetics, and can be tested independently of them. In fact, genetics was unknown to Darwin and was not applied to his theory until well after his death. Regardless, Darwin was still able to muster overwhelming evidence in support of his hypothesis.

The discovery of genetics and its application to Darwin’s ideas of natural selection resulted in what is now referred to as neo-Darwinism or the Modern Synthesis. With the discovery of genetics, we can now understand the mechanism responsible both for the naturally occurring variability within populations, and for the ability of beneficial (and neutral) traits to be preserved while harmful traits are reduced. We now understand that this variability is the result, during reproduction, of both random recombination of genetic information, and random copying errors (including the phenomenon of gene duplication, which actually creates additional genetic material upon which selection can operate). In addition, genetics gives us the means to resolve apparent counterexamples to the natural selection hypothesis, such as the persistence of Sickle Cell Anemia, despite the fact that this disease is clearly not beneficial to the people suffering from it.3 Our understanding of genetics has also allowed the theory of evolution to be extended to include the effects of random genetic drift.

All of these interrelated hypotheses are properly considered part of the theory of evolution. This understanding should make clear a number of debate-related points. First, attacks on Darwinism are not necessarily equivalent to attacks on common descent. Darwin had some additional hypotheses that are not central to evolutionary theory. For example, he felt that evolution proceeded in a slow, gradual manner (a view referred to as “gradualism”). A prediction of this hypothesis is that there should be no “explosions” in the fossil record, and that gaps in the fossil record should be the result only of the lack of opportunity for fossilization itself, and not of a rarity of transitional forms. In light of the preceding outline it should now be clear that, however valid this criticism of gradualism may be, it is not the same thing as a criticism of common descent, nor even of the mechanism of natural selection; it is instead a criticism of the mode or tempo of the mechanism behind common descent.

The role of natural selection as the mechanism behind common descent is now well established as an important one. This was not always the case, however. Lamarckism was a serious contender both during Darwin’s time and in the early part of the twentieth century. Essentially, Lamarckism is the idea that traits acquired during one’s lifetime could be passed on (at least to a degree). For example, a Lamarckian explanation for the giraffe’s long neck would be that the effects of the parent’s constant straining to reach ever higher leaves would be passed on as a slightly longer neck, which would accumulate over the generations. This idea has long since been discredited (via the scientific method described below). While the role of natural selection in evolution is now known to be an important one, debate as to the relative importance of additional factors, such as genetic drift—the accumulation in random directions of neutral (i.e., neither harmful nor helpful) mutations—continues. Debate also continues as to the mode and tempo of evolution, which some argue is hardly a disagreement at all. In particular, “punctuated equilibrium” disputes neither common descent nor natural selection, but emphasizes only that evolution often, but not always, proceeds in fits and starts; that is, evolution is characterized by relatively long periods of little change followed by relatively short periods of rapid change. Importantly, this idea predicts a rarity, and not an absence, of transitional forms, and examples of transitional forms are many.

With this very brief introduction to evolutionary theory a number of important points can now be understood. First, evolution is a blind, unconscious mathematical property of any system of things that make imperfect (though very close) copies of themselves in an environment where the quality of the copy affects the copying rate; consequently, it can be rather easily simulated on a computer. This technique is now used in academia and industry, and is referred to variously as genetic algorithms (GAs), genetic programming, and evolutionary programming. For example, in GAs the ideal design of an airplane wing isn’t created by a designer, but is instead “evolved” through random mutation, recombination, and selection. The results are often completely unanticipated by the creators of the GA. Such applications of evolutionary theory provide a powerful demonstration of the important role of “chance” in the emergence of complexity and order.
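The mutation–recombination–selection loop just described can be sketched in a few lines of Python. Everything here is an illustrative assumption: the “fitness” being maximized is simply the number of 1-bits in a bit string (the classic “OneMax” toy problem, not any real engineering application), and the population size, mutation rate, and generation count are arbitrary. The point is only that blind copying errors plus differential reproduction are enough to accumulate “improvements.”

```python
import random

random.seed(1)

GENOME_LEN = 20       # length of each bit-string "genome" (assumed)
POP_SIZE = 30         # individuals per generation (assumed)
MUTATION_RATE = 0.02  # per-bit chance of a copying error (assumed)
GENERATIONS = 100

def fitness(genome):
    # "Quality of the copy": here, simply how many bits are set to 1.
    return sum(genome)

def mutate(genome):
    # Imperfect copying: each bit has a small chance of flipping.
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    # Recombination: splice two parent genomes at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Start from a completely random population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fitter half of the population gets to reproduce.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # The next generation is built entirely from mutated recombinations.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))  # after enough generations, at or near GENOME_LEN
```

No step of this loop “knows” what the target is or plans ahead; fitter variants simply leave more copies, and the population converges toward the all-ones string anyway.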

Second, there is no target design toward which evolution is somehow striving—evolution has absolutely no foresight or goals (this is easily the single greatest popular misconception), which also means that there is no progressive drive toward increased complexity or intelligence. In fact, there are many examples of simple species evolving from more complex ones. Offspring either survive or they do not. Period. There is no evolutionary “planning,” and evolution cannot “see” beyond that one step: the differential survival of characteristics in one generation.

In fact, we would expect such a process to regularly lead to less-than-ideal design solutions, which is precisely what we find in nature, and in ourselves. This is because evolution has no foresight: it can’t go backwards and do a “top-down” redesign in order to optimally adapt to a new environment. For example, when the Panda bear adapted to a niche of eating bamboo, it couldn’t undo millions of years of carnivore evolution and evolve an ideal bamboo-eating body; instead, its wrist bone was adapted (you’ll sometimes see the term “exapted,” coined by Stephen Jay Gould, to describe this kind of jury-rigging) to serve as the “thumb” it now needs to get at its new food source. What’s more, the Panda’s carnivorous digestive tract is extremely inefficient at extracting nutrients from its new diet, so Pandas have to go through a whole lot of bamboo to get their required nutrition (be careful where you step). This basic evolutionary concept immediately puts to rest the common creationist critique that partially evolved characteristics have no use. You might hear, for example, “What good is half a wing to an animal that is waiting for a full wing?” The answer, of course, is that before it was flight-worthy, and with no “idea” that it would ever be used for flight, the pre-wing structure was either used for something else entirely, or was a genetic side effect of an unrelated feature that was being used (and therefore being selected for). For example, just as the Panda’s thumb was originally a fully functional wrist bone, early feathers may well have been a functional insulation device. This creationist critique reflects only a serious misunderstanding of basic evolutionary theory: no evolutionist ever thought that at some point in history half of a modern wing was once sticking uselessly out of some poor animal’s body. Indeed, evidence of such a process would undermine, and not support, evolutionary theory.

Now, as for why there are still apes if we evolved from apes, the answer is simply that we did not evolve from today’s apes (keep in mind the tree/bush analogy), but with them share a common ancestor that was different from either humans or apes. Generally, the more similar the species the more recent is their common ancestor; the less similar, the further back is their common ancestor.

Good vs. Not-so-good Science

Before getting into questions of evidence it is extremely important to first review the nature of scientific theories and methodology. First, let’s define some terms that often lead to confusion. “Hypothesis” can be thought of as an educated guess which has yet to be confirmed through testing. An example would be what you do when the light in your room suddenly goes off. You might hypothesize as follows: “I suspect that a fuse blew, since the electric clock went off too.” Now, this hypothesis is still subject to testing. You may test it by seeing if the whole house is without power, or by seeing if your street lights are on. In the process you’re throwing out falsified hypotheses and forming and testing new ones. Once the various predictions of your hypothesis become consistently confirmed, you have what might now be a “confirmed-hypothesis.” An interrelated set of such confirmed hypotheses, models, and directly observed facts can form a “Theory,” when collectively they provide a systematically organized body of information that can be used to effectively explain and predict real world happenings. The Theory of the Atom is a clear example, as is the Theory of Evolution.

In everyday usage, “theory” is often used interchangeably with “unconfirmed hypothesis,” or even “wild guess.” It is in this everyday sense of the word that creationists will often complain that evolution is just “a theory.” But scientists also refer to the Theory of Electromagnetism, Gravitation Theory, and the Germ Theory of Disease. When they use the term “theory,” it is in the “web of interrelated confirmed hypotheses” sense of the word. In the creationists’ usage of the word, it would be just as legitimate to say the “Theory of Alien Telepathy” as it is to say the “Theory of the Atom,” but of course, Alien Telepathy is not a theory at all in the sense in which scientists use the term.

Are theories ever absolutely certain? No. But theories are not equally uncertain. For example, one could claim that while germs cause disease under the microscope, no one has ever observed germs making people sick in the human body, and no one has actually seen the electrons orbiting the nucleus of an atom. (This is similar to the creationist complaint that no one can go back millions of years and observe evolution occurring.) Science is not simply about reporting what we directly observe. In fact, the whole point and value of science is to use what we can see to tell us about what we cannot see.

If theories and their hypotheses are never technically certain, how do we know that the Theory of the Atom is any stronger than the Theory of Alien Telepathy? Basically, a theory is strong if it does not contradict itself, and if it makes testable (typically unexpected) predictions that could easily falsify the theory—but, in fact, do not. Importantly, a good theory deals with its serious counterexamples (apparent evidence against the theory) through independently testable “excuses”; that is, the excuses it makes for the counterexamples can be shown to be valid without relying on the theory itself. An example would be how Newton’s failure to explain the orbit of Uranus led not to the rejection of Newton’s theory but to the independently testable “excuse” that there was an as yet undiscovered planet, Neptune, the existence of which was independently verifiable with a telescope. Instead of being a problem for Newton, the Neptune solution provided powerful independent confirmation of his theory.4 A good theory should also show that what had looked like unrelated phenomena are actually related, and it should cause us to ask new questions that we never would have thought to ask without the theory’s insights—questions that lead to even more confirmed predictions. The strongest theories spawn new productive disciplines that further confirm the theory that gave rise to them, while spawning new well-confirmed theories of their own.

The importance of independent confirmation cannot be overstated. Independently confirmed predictions create a mutually reinforced “web” of confirmations that cannot be dismissed simply by casting doubt on any one confirming test. For example, when the prosecution in a courtroom presents not just one witness, but a parade of witnesses none of whom even know each other, each corroborating the same event from different, independent vantage points, they create a very powerful case. What makes it powerful is not just that the defense has to create reasonable doubt in more than one witness’ testimony. The burden on the defense is much bigger than this, much bigger than merely showing that each witness might be wrong. It is even bigger than showing that they are all wrong. The problem is making it reasonable to suppose that they all independently came up not just with a wrong answer, but with the same wrong answer—independently of each other. The odds of such an event would be astronomically small.
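The arithmetic behind that last claim is easy to check. The numbers below are pure assumptions chosen for illustration (a 5% chance that any one witness errs, and ten equally likely wrong answers if one does), but the conclusion is robust: the probability that many independent witnesses all err and all land on the same wrong answer collapses toward zero.

```python
# Illustrative assumptions only, not data:
p_error = 0.05        # chance any single witness is simply wrong
n_witnesses = 20      # independent witnesses who agree on the time
n_wrong_answers = 10  # equally likely wrong answers, given an error

# Each witness must err (p_error) and then land on one particular wrong
# answer (1 / n_wrong_answers); summing over the n_wrong_answers possible
# answers they could all coincide on:
p_same_wrong = n_wrong_answers * (p_error / n_wrong_answers) ** n_witnesses
print(f"{p_same_wrong:.2e}")  # vanishingly small (here, well below 1e-40)
```

Changing the assumed numbers changes the exponent but not the moral: independent agreement on the same wrong answer requires multiplying many small probabilities together, which is why a web of independent confirmations is so hard to dismiss.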

Now even when all these criteria are met by the best scientific theories in history, those theories are never absolutely certain. In fact, it is possible, though perhaps extremely unlikely, that such a theory will be completely overturned. Far more likely, however, is that a theory with such a success record will be enhanced, built upon, perhaps even being shown to be a special case of a more general theory (just as Newton’s theories were when Einstein came along), but not thrown out. In fact, the history of science is just such a history: not a parade of theories that get overturned only to be replaced by the next generation of equally doomed theories, but a story of good theories being built upon, a story of real progress, a kind of “nested hierarchy” of its own—even despite frequent and ferocious resistance from various religious quarters.

In short, good theories work; they add real value and real insight, all of which produce real results. Given this understanding of what makes a theory good, a special problem for any challenger theory should now be apparent. With the earlier courtroom example in mind, if a challenger points out that the dominant theory is unable to answer some question (which is typical of even the most successful theories), or that there is some currently unexplained counterexample, the challenger should not expect more than a “yeah, so?” unless she can show—and this is important—how her new theory not only fills in these gaps in verifiable ways, but how it can, in addition, better explain the vast body of successful predictions, independent corroboration, and explanatory power of the dominant theory. And beyond that, the challenger theory has to make clear how all the mutually reinforcing independent confirmations of the dominant theory were coincidental errors on an apparently vast scale.

It is with this understanding of the nature of science, scientific theories, and evolution in particular, that we need to evaluate the arguments and the evidence for creationism and evolutionary theory.


1 The preceding subsection draws much from Robert Pennock’s very easily understood and thorough untangling of this methodology / ontology confusion, which is shared by those on both sides of the debate. Pennock also makes the important point that much of the fear that appears to motivate creationists comes from this confusion, as well as from an unfounded view that morality can be meaningful only if it comes from a supernatural being. See Robert T. Pennock’s, Tower of Babel: The Evidence Against the New Creationism (Cambridge: The MIT Press, 1999).

2 In a personal correspondence, Vincent M. Wales provided some helpful suggestions on an earlier draft of this article, including that of emphasizing the fact that evolution does not specifically require a single, common ancestor. Building on Wales’ suggestion, I would also hasten to add that the theory of evolution does not specifically address the origin of life at all, only its subsequent development. However, this is not to say that its principles are not a key part of the work being done in origin of life research, nor to say that the evidence supporting evolution is not highly suggestive of there having been just one common ancestor.

3 Genetics provides the insight by showing that the disease results from the presence of a recessive gene, and that a recessive “carrier” of the Sickle Cell gene is at an advantage in environments where malaria is present. “Recessive” in this case means that if you got the gene from only one of your parents, you will not exhibit the Sickle Cell symptoms, but you will be highly resistant to malaria; however, if you got the gene from both parents, you will suffer the full effects of the disease. With this insight, the puzzle is solved because the beneficial effects of the Sickle Cell gene in recessive carriers are selected for. Of course, this applies only when the population is being exposed to malaria. This neo-Darwinian model explains not only the disease’s persistence, but its persistence only in populations that are exposed to malaria.

4 I take this example, as well as many of these methodology-of-science concepts, from Philip Kitcher’s Abusing Science: The Case Against Creationism (Cambridge: The MIT Press, 1993).

Understanding Reason and Faith

The debate between faith and reason is in many ways the decisive battleground in the debate between theism and atheism. This is because most defenses of theism appeal to the inadequacy of reason. Typically these defenses will take the form of claiming that there are appropriate spheres for reason, and appropriate spheres for faith, and that belief in God comes from recognizing the appropriate role for faith and the associated “limitation” of reason. Some theists argue that one can believe in God using both faith and reason. Once again, we should define our terms.1 Faith means that one considers a particular claim (e.g., “God exists”) to be actual knowledge, absolutely certain knowledge. This claim to certainty is held in the absence of adequate evidence, or in direct contradiction to the evidence. Evidence is considered relevant only insofar as it supports the proposition, and irrelevant or inadequate to the extent that it does not support the proposition.

“Faith” has multiple usages, and often in debates the meaning shifts. For example, a theist might state that an atheist has “faith” too. For example, the atheist has “faith” that the sun will come up tomorrow or that the airplane one is about to get into won’t go down in flames. Clearly, this is not the same sense of the word that theists use when they say that they have “faith” that God exists. For example, one can be virtually certain that the sun will come up tomorrow, and this comes from evidence analogous to a repeatable experiment: every day the sun has come up. Of course, it is not certain; an unanticipated event like the sun exploding could force us to revisit our expectations. The airplane example is yet another case of reasonable expectations based on historical evidence, and the (fortunately rare) exceptions are clear reasons why we can never be truly certain when boarding a plane that it in fact won’t go down. The theist, however, is absolutely certain that God exists, absolutely certain that no future evidence will appear that would change his or her mind.

“Reason” means the application of logical principles to the available evidence. While the principles of reason / logic are certain, the conclusions one obtains from them are only as certain as the underlying assumptions, which is why science is rarely, if ever, absolutely certain (though in many cases, its theories are certain to a very high degree of probability). In fact, scientific theories are rarely “deduced,” but are, instead, “inferred”; that is, they are based on inductive logic, or generalizing from specific examples. The “inferred” theory, if it is any good, will make independently testable predictions, and will explain a range of phenomena that had seemed unrelated before. When multiple, independent tests corroborate a theory, it can, just from a statistical standpoint, become virtually certain.2

The critical point here is that while almost nothing is certain, everything is not equally uncertain. Our theories can be ranked by the evidence supporting them, and our degree of “belief” should be similarly ranked; that is, we “believe” in proportion to the evidence—all the way from “completely unsubstantiated” to “some possibility” to “virtually certain.” Compare, for example, the theory that leprechauns really do exist with the Germ Theory of Disease. Neither one is certain, but one is far closer to being certain than the other.

I stated that the principles of logic are “certain.” This touches on a particularly important part of the faith vs. reason debate. Often, the advocate of faith will say, “But you can’t prove the truth of logic, so you must have ‘faith’ in it—just as I have faith in God.” This critique of reason brings to mind the story of the child who keeps asking “why?” to every answer offered by the parent. Of course, this infinite regress of cause and effect cannot go on forever. To understand when to stop asking “why?” is to begin to understand the nature of concepts. Concepts do not exist in a vacuum. With one class of exceptions, concepts derive their meaning from some immediately ancestral set of concepts and can retain their meaning only within that context. You hit “bedrock” when you reach the so-called axiomatic concepts, which are irreducible, primary facts of reality—our “percepts.” These percepts form the foundation upon which we build our concepts. How do you know when you’ve finally hit these primary facts of reality in the long string of why’s? You know—and this is critically important—when there is no way to deny them, or even to question them, without presupposing that they are, in fact, true. To deny them or to even question whether they are true is to literally utter a contradiction.

This “bedrock” test is very specific.  Let’s illustrate it with an example. Suppose I say, “Logic is an arbitrary human invention and could be wrong.” Well, if it is wrong, then the Law of Contradiction (a thing cannot be itself and its negation at the same time and in the same respect) and the related Law of Identity (a thing is itself) are wrong; but then that means the very words that make up my original claim, such as, “Logic is arbitrary” could mean “Logic is not arbitrary” or it could mean both at the same time and in the same respect. In fact, it could mean “I like chunky peanut butter.”  If all that sounds crazy and unintelligible, that’s because it is, as are all utterances when the truth of logical principles cannot be assumed.  The point here is that without the assumed truth of logic, language itself becomes impossible. So the contradiction is this: For my original statement to have any meaning at all, logic has to be true, but the content of my original statement questions that truth: a self-contradiction. Logic, then, is not accepted on “faith” but as a necessary, self-evident truth, something that is required to speak or think at all. The same can be shown for the concepts of existence, consciousness, and the reliability of our senses. Again, there is no way to talk about any of these things being possibly untrue without first requiring them (implicitly) to be necessarily true.

In life one is exposed to claim after claim (Aliens, Heaven’s Gate, Pyramid Power, ESP, etc.). What criteria should we apply to separate claims that correspond better with reality from others that do not? To use an earlier example, how do we decide that the Leprechaun theory should not be taken just as seriously as the Germ Theory of Disease? The answer is that we know by applying the standard of reason. If faith were a viable alternative to reason, then what are its rules? How do we know when to apply it? How do we know when someone has misapplied it? How can we tell the difference between the effects of faith and the effects of inadvertent, though well-meaning, self-delusion? Indeed, how can we test its validity?

Let’s illustrate this problem.  A member of Christian sect X believes that all other sects are damned, and she says that she knows this through faith. The person she is talking to is a member of sect Y that believes only sect Y is the one true faith, and that all others are damned, including members of sect X—and, of course, she knows this through faith.   Clearly they both cannot be right. The member of sect Y asks the member of sect X how she knows that she is not really just hearing the deceitful voice of Satan leading her down a false path. To that our sect X member confidently replies, “I know that through faith as well.” Not surprisingly, these are the same answers given by the member of sect Y to exactly the same questions regarding her confidence in the truth of her favorite sect.  There is no independently validated method to resolve this. If reason is not the standard, then there literally is no standard, and people who abandon it have simply written themselves a blank check to believe whatever they choose. Cloaking this irrationalism with comfortable terms like “faith” does not make it any less irrational. As John A. T. Robinson once put it: “The only alternatives to thinking with reason are thinking unreasonably and not thinking.” 3


1 George H. Smith, Atheism: The Case Against God (Amherst: Prometheus Books, 1989), gives an excellent introduction to this critical subject. I draw from Smith both here and below, in my discussion of axiomatic concepts, and Smith is drawing from the Objectivist epistemology of Ayn Rand.

2 Philosopher of science, Philip Kitcher, in his Abusing Science: The Case Against Creationism (Cambridge: The MIT Press, 1993), from which I am drawing these points, gives an outstanding introduction to the methodology of science.

3 Quoted in Smith, op. cit.  p. 110.

Bible Accuracy

The question of Bible accuracy is important in many debates, particularly those regarding creationism and certain defenses of Christianity. Creationism is, in fact, an attempt to make science compatible with the fundamentalist requirement of Biblical literalism and infallibility. Christian theism typically defends its claim to truth by appealing to supposedly fulfilled Bible prophecies. Before tackling these issues, we need to understand the context.

A Brief History of the Bible

The Bible descends from what was an ever-changing and expanding body of written and oral traditions dating from as early as the 12th Century B.C. The reformulations and additions continued from then all the way up until the 4th Century A.D. when, out of a large collection of candidate books, some were selected to be part of what we now call the Bible. It is important to remember that literally none of the original manuscripts of either the Old or New Testaments has survived. The Bible was passed down by individual manual copying and translation right up to the invention of printing in the 15th Century A.D. The oldest manuscript copies date from sometime during the first three Centuries A.D. The original language of the Old Testament was Hebrew, followed by Aramaic translations appearing in the period following the Exile, and then Greek translations following Alexander the Great. It was not until around the 2nd Century A.D. that the contents of the Old Testament had become fixed.

The original language of the New Testament was Greek. As with the OT, no originals now exist, and the oldest of the manuscript copies dates from the 2nd Century A.D. Before the NT was “canonized” into its current form, each of the early Christian communities apparently had a gospel of its own, in some ways redundant, in some ways in direct conflict, with the gospels of other communities. Some of these included the Gnostic Gospel of Thomas, the Gospel of Hebrews, the Gospel of the Ebionites, a Gospel of the Egyptians, an Apocalypse of Peter, an Apocalypse of Paul, and the Epistle of Barnabas, to name just a few.

What the Christians used as an “infallible” Bible was different depending on which Christian community you talked to, at least until well into the 4th Century A.D. In A.D. 325, Emperor Constantine convened the Council of Nicaea, which ended a power struggle in Christian circles as to the nature of Jesus (the actual picking and choosing of the books was settled later in that century, at councils such as Hippo in A.D. 393 and Carthage in A.D. 397). As Roman Emperor, Constantine decreed that the Trinitarian view would become Christian dogma (which is remarkable considering how weak his Christian credentials were), and this decree silenced the large Arian segment, which held that Jesus was a created being subordinate to God the Father.

Of course, the history doesn’t end there. As the Bible was translated into Latin, Augustine ultimately complained of the “infinite variety” of Bible translations. Under the direction of Pope Damasus, Jerome attempted to standardize the Latin Bible. Drawing on Hebrew, Greek, and Latin sources, he completed the “Vulgate” by sometime around A.D. 405, which was ultimately recognized as the Standard Bible of the Roman Church (1546).

The first English Bible was completed in the late 1300’s by John Wyclif, an Oxford instructor in religion and philosophy. Condemned by the church, it survived underground for some 150 years. Then, around 1524, William Tyndale, an Oxford- and Cambridge-educated linguist influenced by Erasmus and Martin Luther, published a New Testament translation based on medieval Greek copies. Then Miles Coverdale’s Bible appeared (~1535), based on German and Latin versions as well as on Tyndale’s work. John Rogers and Richard Taverner also published their particular translations (~1539), drawing from and adding to each other’s and Tyndale’s work. All of this was eventually edited by Coverdale into the Great Bible, which the King approved. Separately, the Roman Catholic church created its first English Bible, the Douay version, which was based directly on the Latin Vulgate (~1609).

In 1604, King James I wanted a fresh start, and pulled together Oxford and Cambridge scholars, as well as Puritan and Episcopal priests. This large group used the Catholic Douay, Luther’s German translation, the available Hebrew and Greek copies, and, to a very large extent, Tyndale’s work, and created the King James Version (~1611). Language, of course, is a fluid thing. Just how fluid can be seen in just a few examples: In 1611 “allege” meant “prove,” “prevent” meant “precede,” and “reprove” meant “decide.” To cope with this, the English Revised Version came out by 1885, followed shortly by the American Standard Version.

Clarifying Infallibility

This long, circuitous history spanning some 3,000 years makes clear that the infallibility of even the oldest manuscript copy–let alone some remotely descended English Bible–requires divine inspiration all along the very, very long line of manual copying and translating. (Remember, this is all occurring before the advent of the printing press.) However, once one puts the stake in the ground and says, for example, “The King James Version is infallible,” then one eliminates any appeals to “mistranslation” from the Hebrew or Greek. After all, that would be an obvious example of fallibility. On the other hand, if only the original, autograph manuscripts are infallible (none of which exist), then the whole line of copies, from the oldest manuscript copies (like the Dead Sea Scrolls) to all of today’s many descendant versions, is not infallible. It is, therefore, important to understand in which sense your opponent believes the Bible to be infallible. In the first sense, simple, plain-language contradictions and factual / scientific errors are all one needs to falsify the claim of Biblical infallibility. In the second sense, the notion of infallibility is simply irrelevant to any extant biblical sources or translations, since the original autograph manuscripts are not available.

Atheism, Agnosticism, and Burden of Proof

“How can you be an atheist if you cannot disprove the existence of God?” This all-too-common question is often related to a misunderstanding of the concept of burden of proof, of how that concept relates to belief, and of how both of these ideas relate to the definitions of agnosticism and atheism.

Burden of Proof

You have probably heard the term “burden of proof” used in courtroom settings, often in the context of a criminal trial where the accused is innocent until proven guilty. What this means, of course, is that the accused’s innocence is assumed to be true, unless someone can actually prove otherwise. In other words, the accused’s innocence is the default position. As a result, it is absolutely not required for the accused to prove his innocence; he has only to show that, based on the prosecution’s case, there is no good reason to believe in his guilt; that the arguments and evidence presented by the prosecution are either unreliable, or do not make his guilt any more likely than some alternative explanation. Simply put, the burden of proof is on the prosecution.

Importantly, when a jury returns a finding of “not guilty” they are not saying that they believe the suspect is innocent beyond a reasonable doubt. They may have a truckload of doubt about his innocence. Their finding means only that reasonable doubt exists as to the suspect’s guilt. If there is such reasonable doubt and the burden is on the prosecution, then the jury is ethically and rationally required to acquit.

Why do we put the burden of proof on the prosecution? Because otherwise the prosecution’s job would be much too easy. For example, imagine you had to prove your innocence against the charge that you have the supernatural ability to cause cancer in humans anywhere in the world, and that you actually use this ability for your own sadistic pleasure, which is behind all the world’s cases of previously unexplained cancers. To make their case the prosecution puts you on the stand and asks, “Well then, if you aren’t guilty, how do you explain all the cases of mysterious cancers?” Helplessly, you admit that you can’t, to which the prosecution replies with an accusing finger, “Ah ha!” With your fear and frustration mounting you ask, “What makes you think it’s me!?” The prosecution immediately points out that Analogyland is not a country like the US: here you are guilty until proven innocent, and so the prosecution does not have to explain or prove anything. On the contrary, it is you that has to do the explaining. Nonetheless, the prosecution is feeling a bit generous and they volunteer that they are charging you based on their own psychic powers, powers that give them direct knowledge of evil people like you. When you ask what evidence they have that such a psychic sense is even reliable, they angrily warn you that your attempts to confuse the court will not be tolerated. The prosecution then reminds you once again that the only party who must present evidence is you, and that if you cannot prove the prosecution wrong, you are guilty by default.

Needless to say, you would be doomed in such a situation. In fact, in any land where your guilt is the default assumption, you would be doomed to a guilty verdict whenever the charge against you was unprovable, and the list of unprovable charges is limited only by one’s imagination (e.g., you are a witch who magically eliminates all evidence against her, or the reincarnation of Hitler pretending to be a good person, or an evil deity whose pretense at being a normal person is just what one would expect, etc., etc.). As you can imagine, in such a land you could easily send anyone you didn’t like to prison by dreaming up some unprovable claim; and, of course, they could do the same to you. One doesn’t have to look too far into history to find examples of this, such as the famous Salem Witch Trials.

To clarify the issue even more, let’s leave the courtroom setting altogether and just look at some off-the-wall factual claims that someone who comes knocking at your door might make. Let’s say someone, who calls himself Contactee Bob, knocks on your door and claims that leprechauns are real, but they are so smart and powerful they know how to avoid most human contact. However, Bob explains, leprechauns do make direct telepathic contact with a few deserving humans. When these leprechauns reveal their existence through such telepathic communication, the humans on the receiving end experience it as direct, revealed knowledge of the leprechauns’ existence. Bob claims to be such a contactee and hopes that one day you will be too. “Think positive leprechaun thoughts,” says Bob, “and you will be contacted too!”

At the end of his speech you ask Bob how he knows there aren’t alternative explanations of his “revealed knowledge” experience that are at least as probable as his leprechaun explanation (such as an obvious psychological one). Bob has heard this before, of course, and he’s ready: “Okay, Mr. Skeptic, prove there are no leprechauns.” Bob has even studied the contactee apologetic literature and takes his objection a step further: “If you claim that there are no leprechauns, then that implies you are omniscient since you would have to have knowledge of all parts of the universe to know that there are no leprechauns anywhere in it.”

This last statement of Bob’s would actually be true if the burden of proof were on you to disprove his claim, rather than on him to prove it. Bob has shifted the burden of proof onto you, the innocent bystander being subjected to his claims. He is arguing that his claim must be considered true by default unless you can disprove it, while all he has to do is watch you flounder in the attempt. What Bob probably doesn’t realize, however, is that in order for him to consistently hold such an interpretation of burden of proof, he would have to believe everything that he couldn’t disprove. While his interpretation of burden of proof lets in his leprechauns, it does so at the price of letting in an unimaginably large army of other bizarre creatures from the depths of human imagination and mythology. Consider the implications of this. Just to amuse yourself you could dream up claims of your own and present them back to Bob, claims like, “there are undetectable trans-dimensional hyper-intelligent fish all named, curiously enough, Wanda,” “There is an invisible, 3-headed dragon named Morris Minor who is a Douglas Adams fan and is responsible for manipulating the weather in such a way that it appears to be almost, but not quite, entirely unpredictable,” etc., etc., and Bob, by his understanding of burden of proof, would have to believe them all simply because he could not disprove any of them. This is what follows when the concept of burden of proof is misapplied or ignored. Clearly, this way madness lies.

What is the common element in the courtroom and Contactee Bob examples? It is that in both cases the burden of proof is not on the person making the positive claim. Now, by “positive” claim I mean any truth claim, such as “X is true” or “X is false.” Notice that saying something is false is also a positive truth claim: you are claiming that the assertion “X is false” is a true statement (i.e., you positively disbelieve X). For example, the claim “leprechauns do exist” is just as positive a claim as is “leprechauns do not exist.” Each is a claim to truth, and the burden of proof properly lies with the person making either claim.

But wait a minute. Doesn’t this take us right back to where we started? Doesn’t this mean that if I lack belief in Bob’s claims, then I have a burden of proof on me to prove him wrong just like he argued? Well, that depends on what you mean by “lack belief.”

Absence of Belief vs. Disbelief

A newborn baby lacks belief in quite a bit, including the concept of God. Obviously, however, this is not the same thing as saying the baby disbelieves these claims (i.e., believes that these claims are false); the baby is merely absent belief—due in this case to its lack of awareness of the very concepts in question. Similarly, an adult who has grown up deep in an Amazonian jungle, and who has never even heard of people existing outside the jungle (let alone of such outsiders’ religions), is certainly absent belief in the Judeo-Christian God; however, once again, one could not say that this Amazonian regards God with positive disbelief, only that he is absent belief—due in this case to his never having been introduced to the idea.

In each of these examples, the individuals lack awareness of the concept of the Judeo-Christian God. Clearly, it makes no sense to say that they believe the claim “God exists” is false. On the other hand, it also makes no sense to say that they believe the claim is true. As a result, we cannot call them theists, and, depending on how one defines the term, we may not be able to call them atheists either (we’ll explore this a bit more later). But we can say that they are absent belief.

Note that in the preceding examples it also makes no sense to say that these individuals believe that the probability that God exists is around 50%. This leads us to a third sense in which one could be said to be absent belief in a claim. This occurs when there is at least some evidence for a claim, but this evidence is offset by equally strong evidence for the falsity of the claim or for alternative explanations. In other words, on balance, the claim is seen as somewhere around 50% probable. In such a case one would be rationally justified in “suspending judgment,” in making no commitment at all. If we’re talking about the claim that the Judeo-Christian God exists, then, as in the prior cases, we cannot call a person holding this view a theist.

One thing should be clear from the preceding discussion. We all start out at some point in our lives from the position of absence of belief due to absence of information. Clearly there is no burden of proof on individuals in this “position” (really a non-position). They are, as it were, simply waiting for input. It would be irrational for such people to claim to believe one way or the other. However, while you start out in life by justifiably saying, “the burden of proof is on you, Mr. Positive Atheist and Ms. Theist,” as soon as attempts are made to meet that burden, the ball is right back in your court. You will need to assess the arguments and then assess the probabilities that the claim is true or false. Importantly, as the last of the earlier examples demonstrates, absence of belief may be the result of just such a reassessment: it may indicate not the default starting position of ignorance, but a reasoned, defensible conclusion that the odds of the claim’s being true are around 50% (not a view that I happen to share, by the way). In this special case, one is making a positive claim that the arguments and evidence support even odds.

Believing in Proportion to the Evidence

At this point all we have said is that one believes, disbelieves, or is absent belief in some claim. We’ve also suggested that this latter state is due either to ignorance of the claim (no burden of proof) or to an assessment that the probability of its being true is around 50% (carries a burden of proof).

This notion of probabilities of truth is much closer to the way science actually works than it is to the simplistic notions of absolute proof and absolute disproof, with everything in between treated as “unknown,” or “speculative.” Looking at truth values in this probabilistic fashion suggests a continuum; and assuming we believe only when we are rationally justified to do so, our “beliefs” cannot simply be “on” or “off.” Instead, our beliefs are a function of the probability that they correspond to factually true statements: the higher the probability, the more we believe; the lower the probability, the less we believe; and the lower still, the more we disbelieve.

So, what affects the probability that a claim is true? Certainly direct evidence, both for and against, directly affects it, including such things as the claim’s predictive success and explanatory power; but it is important to recognize that background knowledge is also an extremely relevant form of evidence. By “background knowledge” we mean the entire database of well-confirmed human knowledge.

Some claims, if true, would require that we throw out some of this well-confirmed background knowledge. To be sure, having to throw out cherished parts of our background knowledge is not unknown in history, but such revolutions have occurred only when the evidence behind them was truly overwhelming. For example, if some theory is extremely well supported (say, at better than 99.9%, as in the case of the Theory of the Atom), then any new claim that entails throwing out this part of our background knowledge needs to be supported by even better evidence than that supporting the existing theory. If weak or no evidence is offered, then should we remain absent belief in this new claim, as if the probability were around 50%? Definitely not. Why? Because the evidence against the claim is the evidence we already have for the background knowledge, which means the probability that the claim is true, in the absence of strong evidence, is extremely low.
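This trade-off between background knowledge and new evidence can be made vivid with a toy Bayesian calculation. The sketch below is purely illustrative: the function, the prior of 0.001 (standing in for a claim that contradicts a theory supported at 99.9%), and the likelihood figures are all hypothetical numbers of my own choosing, not anything from the text.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# A claim that contradicts background knowledge supported at 99.9%
# starts with a prior of about 0.001.

# Weak evidence -- only twice as likely if the claim is true as if it
# is false -- barely moves the needle: the claim stays near 0.2%.
weak = posterior(prior=0.001, p_e_given_h=0.8, p_e_given_not_h=0.4)

# Overwhelming evidence -- thousands of times more likely under the
# claim -- is what it takes to overturn that much background knowledge.
strong = posterior(prior=0.001, p_e_given_h=0.9999, p_e_given_not_h=0.0001)

print(f"weak evidence:   {weak:.4f}")    # still far below 50%
print(f"strong evidence: {strong:.4f}")  # now well above 50%
```

The point of the sketch is simply that "absent strong evidence, the probability stays extremely low" is not rhetoric but arithmetic: the well-confirmed background knowledge acts as a very low prior that weak evidence cannot budge.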

There are also claims that, while not contradicting background knowledge, are completely unconnected to it, or go well beyond it in some radical way. But here again, the fact that a claim is “out of nowhere” does not make it false, though it does raise the evidentiary bar. Examples of this include certain notions of “supernatural” (specifically those that do not entail a logical contradiction as does the Judeo-Christian God). For example, while it doesn’t directly contradict our background knowledge, a claim that an intelligent, extraordinarily powerful alien somehow “started” the world 5 billion years ago is certainly a claim that is not in any sense a plausible extension of our background knowledge; it resolves no mysteries (other than replacing current mysteries with even bigger ones); and it lacks even suggestive support either directly, or indirectly from our background knowledge.

Now this latter situation is particularly interesting for a number of reasons: the claim is logically possible, it has no negative evidence against it, and it conflicts with no part of our background knowledge. On the other hand, it has no evidence for it, not even the mild suggestive support that comes from its being plausibly related to, or extended from, our background knowledge. The question, then, is should we consider such an “out of nowhere” claim as having a 50% probability of being true? Should we say that it is as likely to be true as not? Definitely not. Why? Because the only thing such a claim has going for it is mere logical possibility. However, the list of the logically possible is effectively infinite. Since there is good reason to believe that the set of all factually true statements is vanishingly small compared with the set of all logically possible statements, the odds are vanishingly small that some “out of the blue” logically possible statement would actually correspond to a fact of reality. Without some evidence supporting a claim there is no reason to believe that the odds favoring its factual truth are different from the odds favoring the factual truth of a randomly constructed logically possible statement, and the odds of that are effectively zero. Therefore, we have to conclude that an entirely unfounded claim—even in the absence of negative evidence—deserves not simply suspension of belief, but actual disbelief.

Certainty

Of course, if a claim is not even logically possible, then all this talk about probabilities becomes moot. In such a case I can be absolutely certain (100% probability) that a claim is false—without any evidence at all. A claim is logically impossible when it is self-contradictory (also called incoherent). For example, if you claim that there exists a married bachelor or a triangular square, then the probability that you are wrong is 100%. Of course, these examples are obviously incoherent; however, many claims are no less incoherent, though this fact is far less obvious. That’s where careful analysis comes in. When such analysis shows that some claim entails a logical contradiction, then one can be sure that the claim is false—again, without introducing evidence at all. This kind of evidence-independent argument is referred to as an a priori argument: it is based on logic alone.

Belief vs. Knowledge

So far, we have been talking about rationally justified beliefs. To be sure, beliefs are often held without rational justification—even without the pretense of rational justification, such as when people talk about believing something “on faith.” This subject is dealt with elsewhere1, so I won’t repeat it here beyond noting that faith is not a justification, but an admission that there is no justification. If there are rational reasons to believe something, then “faith” is superfluous. If there are no rational reasons to believe something, then continued belief is, by definition, irrational. Clothing irrationalism with the more comfortable term “faith” changes nothing. Indeed, free of the restrictions of rational justification, I could have faith that the tooth fairy is real, or any of the world’s sundry gods, goddesses, wood nymphs, and other human imaginings. The problem is that there are no independently validated rules that govern the use of faith, which makes it a kind of blank check. As a result, faith allows in God—along with any and everything else. Reason is the only validated means we have to qualify a belief as actual knowledge, the only means we have to separate the “wheat from the chaff.”

Meaninglessness

Some so-called claims are not really claims at all. Claims made with meaningless terms amount to nothing more than a kind of noise. Examples include, “xonipboo loves you” and “myrssla exists.” There not only is no actual evidence for or against such statements, there is no conceivable evidence that would count for or against them; consequently, they are factually meaningless. Since such a claim asserts nothing, then it means nothing to say one believes that a factually meaningless claim is either true or false—to any degree at all. The subject of such statements does not even have a referent. If I say I believe in “it” (or don’t believe in “it”), then I literally cannot know what I am talking about—I literally have no idea just what it is that I think I believe (or disbelieve) in.

Defining Atheism and Agnosticism

Pulling all these ideas together, we can now make what I hope are more useful and more accurate definitions of atheism and agnosticism.

At the most literal level, atheism can be analyzed as follows: “a” means without, and “theism” means belief in God. So, literally “atheism” means without belief in God. “Theism” of course, means with belief in God. Now, one either believes in the Judeo-Christian God or one does not. If one does, then one is a theist; if one does not, then one is an atheist. Technically, then, the terms “theism,” and “atheism” are mutually exclusive and jointly exhaustive—there is no third alternative. Even the case of absence of belief falls under atheism, since absence of belief means absence of belief in God—that is, without belief in God. Since such persons are clearly not theists, they are (literally speaking) a-theists.

Of course, while literally true this definition does not really capture the typical modern usage of the term “atheism.” Indeed, under this definition all newborns would be considered atheists, which seems inconsistent with the way people usually understand the term. The most simplistic understanding of the terms “theism” and “atheism” in common usage is that atheists are absolutely sure that there is no God, while theists are absolutely sure that there is. Everyone else would be considered some flavor of agnostic. Interestingly, “agnosticism” once referred only to the view that some particular claim is unknowable. Under this definition many believers would be, in fact, agnostics, or more specifically, agnostic theists; they agree that God’s existence is unknowable, but they believe anyway on faith, while agnostic atheists reject faith as invalid, and so remain absent belief. So, under this view “agnosticism” refers only to one’s views about knowability, and not to one’s beliefs. However, this usage of the term “agnosticism” also doesn’t seem to reflect typical modern usage. Currently, the term seems to refer to the view that one neither accepts nor rejects God’s existence because God’s existence is unknowable. There is also the more general term, “negative atheism,” which refers only to the first part of that definition: that one can neither affirm nor deny God’s existence—for whatever reason. However, “agnosticism” is sometimes used in that more general sense as well, in which case it is synonymous with “negative atheism” (I use “agnosticism” in this latter sense—as synonymous with negative atheism). Negative atheism is the position that one should not believe, not that one should disbelieve. Positive atheism, on the other hand, argues that one should positively disbelieve—that is, one should believe that the claim, God exists, is false.

These more popular definitions leave a few gaps; for example, they all seem to implicitly assume that everyone is familiar with the subject matter and the arguments. If so, then what about our newborns, and those who have never been exposed to Judeo-Christian theism? Well, if they lack belief, which they do, then they are not theists; however, they do not disbelieve in God, so they are not positive atheists either. Therefore, they are a kind of negative atheist—but it’s the uncritical, uninformed kind, and this is usually made clear by qualifying one’s usage of the term with just such modifiers. Additionally, these popular definitions do not address whether the views are justified by appeal to reason. To be sure, people can be found in each of these categories whose beliefs have nothing to do with reason. For example, some ardent positive atheists have no rational justification for their views at all—perhaps because they are postmodernists or social constructionists, or because of political dogma, or because it’s part of the doctrine of some earth-spirit religion that they accept on faith, etc. Similarly, many theists openly reject reason, and even seem to find delight in saying that they believe despite flagrant logical contradictions—contradictions that they are all too happy to admit to. Where this distinction must be made—between appealing to reason and flouting it—the words “critical” or “rational” are often introduced, as in “critical positive atheist.”

If we take these definitions, and the caveats just mentioned, and couple them with our earlier discussion about believing in proportion to the evidence—that is, viewing “truth” as a probabilistic function of all relevant evidence and knowledge—then we can summarize everything into the following chart2:

 

Primary Term     | Other Common Terms             | Attitude    | Evidence                          | Description
Positive Atheism | Atheism                        | Rejection   | Absolutely disprovable            | Logical impossibility
Positive Atheism | Atheism                        | Rejection   | Completely unfounded              | No direct or indirect support at all, even though there is no negative evidence
Positive Atheism | Atheism                        | Rejection   | Empirically disprovable           | Logically possible, but odds are very close to 0%
Negative Atheism | Agnosticism                    | Skepticism  | Empirically unlikely              | Odds are low
Negative Atheism | Agnosticism                    | Uncommitted | Empirically indeterminate         | Substantial evidence, but about equally substantial evidence for alternatives
Negative Atheism | Agnosticism                    | None        | No conceivable evidence           | The claim in question is factually meaningless
Negative Atheism | Agnosticism                    | None        | Unaware of any relevant evidence  | Has never heard of the claim in question, or has never given it any thought
Positive Theism  | Agnosticism / Pragmatic Theism | Inclined    | Empirically likely                | Odds are high
Positive Theism  | Theism                         | Acceptance  | Empirically provable              | Logically possible that it’s wrong, but odds are very close to 100%
Positive Theism  | Theism                         | Acceptance  | Absolutely provable               | Logical necessity

 

Conclusion

To tie our whole discussion together, let’s apply these ideas to the earlier Contactee Bob example. Contactee Bob doesn’t appear to be claiming something that is logically impossible, but he certainly hasn’t given us any reason to think that his claims are true; that is, he hasn’t given us any reason to suppose that his claims are more probably true than not; consequently, he has failed to meet the burden of proof. So, do I believe that Contactee Bob’s claim is true? No. Do I believe that Contactee Bob’s claim is definitely false? Technically, no, but I am not saying it’s a coin toss either, that it is as likely to be true as false. In the Contactee Bob example, we do have knowledge of background facts that bear on the likelihood of his claim being true. So, while there is no evidence supporting Bob’s claim, there is evidence supporting the alternative claim that Bob has deluded himself. We know, for example, that there are many supposed contactees for many types of strange beings, such as angels, aliens, various gods and goddesses, etc. As a group, these contactees report mutually contradictory experiences—even when the contactees are talking about the very same being. While this doesn’t conclusively prove that Bob is wrong, it is consistent with the alternative claim that all contactees of all of these “beings” have deluded themselves regarding the nature of their experiences. This background evidence, then, makes the competing self-delusion claim far more likely than Bob’s claim.

Bob also argued that it is self-contradictory to deny the existence of leprechauns because such denial implies omniscience; that is, we would need to have knowledge of all parts of the universe to know that there are no leprechauns anywhere in it. Our discussion up to this point should now make clear the fatal flaw in this argument: Bob assumes that absolute disproof is necessary to justify both absence of belief and disbelief. But, as we have seen, one can be absent belief or even disbelieve in the existence of leprechauns without at the same time claiming absolute knowledge that leprechauns do not exist. One should be at least absent belief based on Bob’s having failed to meet his burden of proof. But, when one also considers how this claim fits with our background knowledge and other known facts about claims of this type, then one can go further and have very good reason to suppose that Bob’s claims are extremely improbable, and therefore, to positively disbelieve them.

So, whether people claim that Santa Claus, telepathic space aliens, or God exists, the burden of proof is on them. None of these claims becomes true by default; that is, they do not become true on the grounds that they cannot be disproved. If those making a claim do so in a logically consistent way but cannot meet their burden of proof, then absence of belief is the best that can be rationally achieved. However, depending on other relevant facts (such as background knowledge), we may have good reason to suppose that the claim, while not absolutely false, is probably false in the extreme, thereby making positive disbelief the only rational option. Moreover, if they make a claim in a self-contradictory way, then we can be 100% sure that they are mistaken even without any evidence at all. Finally, there is also another way in which people may make their claims: they can use factually meaningless terms.

It is interesting to note that if one successfully demonstrates that religious terms like “God” are factually meaningless, then one cannot be a positive atheist, since to positively claim that God does not exist implies that the word “God” has meaning. I take the approach, along with Michael Martin3, that one can argue that religious terms are meaningless, and then go on to argue that if we assume, for the sake of argument, that the terms are meaningful, then contradictions immediately follow. So, to the extent I am successful with the first part, I am a negative atheist, and to the extent I am wrong in the first part but right in the second, I am a positive atheist.


1 P. Wesley Edwards, “Reason vs. Faith,” FreethoughtDebater, <http://www.freethoughtdebater.org/?p=107>, 2004.

2 This chart is influenced by something similar in Michael Scriven, “God and Reason,” in Critiques of God: Making the Case Against Belief in God (Amherst, NY: Prometheus Books, 1997), p.109.

3 An excellent, if somewhat advanced, overview of these issues is Michael Martin, Atheism: A Philosophical Justification (Philadelphia: Temple University Press, 1990). A classic, and very easy to read, work that hits the same points is George H. Smith’s Atheism: The Case Against God (Amherst: Prometheus Books, 1989).

The Bible: A Manual for Living?

Christians often take great pride in their ability to cite Biblical verse on a wide range of issues, and this is certainly understandable since they believe that the Bible is the Word of God.  Fundamentalists, in fact, see it as the inerrant word of God—a book that says what it means, and means what it says . . . period.  As such, the Bible is seen not only as a guide for finding “family values” and for making daily moral choices, but as a window into the “infinite” and perfect goodness of God.  In fact, many believers have referred to it simply as God’s “Manual for Living.”

One need not read the Bible very far, however, before realizing that “good” must be a very flexible word if it is to be applied to God.  In relation to the Biblical God, “good”—indeed, infinite and perfect goodness—evidently includes genocide, rape, slavery, torture, and a view of women as being worth little more than livestock. Christian apologists can often find scriptures that describe God in much more benevolent and even compassionate terms.  But finding them doesn’t delete these other scriptures.  Using such “nice” scriptures to defend the goodness of God by somehow offsetting the many other horrific scriptures is like arguing that Stalin, Hitler, or Genghis Khan were really “perfectly” good because there were instances of their being kind to people who pleased them, as if this somehow completely negates their less compassionate moments.  Such things would no more offset the other evils than would giving candy to a child offset the crime of having earlier tortured her.

Sometimes the Old Testament examples of God’s atrocities are defended by saying that God’s moral “rules” were different in the Old Testament as compared with the New Testament.  Such an argument, coming as it does from people who claim to believe in an absolute, universal moral standard, is especially ironic.  In making such an argument, they are literally implying that right and wrong are not absolutes over time, but are absolutes only in the sense that whatever God says goes—even if what He says switches from wholesale genocide one day to pacifism the next.

Needless to say, there are always Christian attempts to reinterpret the scriptures we are about to read as saying something entirely different than is actually printed in the Bible.  The clarity of the following scriptures, however, makes such attempts transparent.  Remember to read what it says, not what apologists wish it said.

Before proceeding, and in all sincerity, I did want to provide the following advisory: The following scriptures depict graphic violence, including sexual violence, and may not be suitable for younger readers.  Please note also that the standard King James Version will be used unless otherwise noted.

Excerpts from God’s “Manual for Living”

Official orders, which include genocide and rape, have come not only from recent and not-so-recent tyrannical regimes, such as Hitler’s Berlin or the tent of Genghis Khan.  They have come also from what many Christians define as an infinitely good, kind, and merciful God.  Perhaps in the following scriptures the fact that young virgins were being spared (for obvious reasons and only after witnessing their parents and siblings being butchered before their eyes) is considered by some a demonstration of “mercy,” but I can imagine very few openly making such a claim.

Numbers 31:17-18 “…now therefore kill every male among the little ones and kill every woman that hath known man by lying with him.  But all the women children, that have not known a man by lying with him, keep alive for yourselves.”
1 Sam 15:3 “Slay both man and woman, infant and suckling.”
Ezek 9:6 “…neither have ye pity, slay utterly old and young, both maids and little children, and women.”

Such atrocities were often carried out as part of the worst kind of military aggression: one that goes beyond the seizing of power to the actual extermination of the original inhabitants, sparing some of the women and children to serve as slaves.  Indeed, one can’t help but think of Hitler’s lebensraum policy when reading scriptures such as these.  If God’s name were removed from these passages, and replaced by “Hitler,” for example, what would the reaction of a good Christian be?  I suspect it would be horrified disgust, anger, and indignation—but only after the name change.  Of course, for many people, horrified disgust, anger, and indignation are felt regardless of whose name is assigned to these deeds.


Deut 7:1-2 “When the Lord thy God shall bring thee into the land whither thou goest to possess it, and hath cast out many nations before thee…thou shall smite them and utterly destroy them…nor shew mercy unto them.”
Deut 20:11-14 “…that all the people that is found therein shall be tributaries unto thee, and they shall serve thee… And when the LORD thy God hath delivered it into thine hands, thou shalt smite every male thereof with the edge of the sword. But the women and the little ones, and the cattle, and all that is in the city, even all the spoil thereof, shalt thou take unto thyself; and thou shalt eat the spoil of thine enemies, which the LORD thy God hath given thee.”

Today, mild-mannered Christians often come to your door, presumably moved by God’s spirit.  Many of them will describe how their loved one’s health benefited from prayer, and that they were moved to demonstrate God’s love to others through charity, caring for the needy, and preaching the “Good News.”  They believe that through such deeds, people (in God’s words), “might know I am the Lord.”  But this pleasant strategy represents quite a change of heart from earlier methods employed by God Himself:

Ezek 20:26 (RSV) “I defiled them…making them offer by fire all their first born, that I might horrify them; I did it that they might know I am the Lord.” (italics added)

Of course, many Christians argue that God’s wrath is reserved for you only if you abuse your “free will.”  God, after all, gives everyone a choice—pick the right one, or else—but a choice, nonetheless.  But does He really give everyone a choice?  He seems to slaughter children a great deal, with a particular preference for firstborns, as we just saw.  It is entirely unclear what choice a newborn failed to make correctly.

Ex 12:29-30 “…the Lord smote all the first born in the land of Egypt, from the firstborn of Pharaoh that sat on his throne unto the firstborn of the captive that was in the dungeon…for there was not a house where there was not one dead.”

I suppose it could be argued that older children caught poking fun at some older bald gentleman might be old enough to know better, though it would seem that the “Christian” thing to do in such a case might be to ground them for a day or two and tell them to apologize.  Actually, in His “manual for living,” the Lord, described by so many Christians as loving and good, felt it was appropriate to send 2 bears to quite literally rip apart 42 children for the crime of making fun of Elisha’s bald head:

2 Kings 2:23-24 “And he went up from thence unto Bethel: and as he was going up by the way, there came forth little children out of the city, and mocked him, and said unto him, Go up, thou bald head; go up, thou bald head.  And he turned back, and looked on them, and cursed them in the name of the Lord.  And there came forth two she bears out of the wood, and tare forty and two children of them.”

Few jurors today would feel comfortable sentencing adult felons to being ripped apart by wild animals, and it seems unlikely that they would feel any more comfortable upon discovering that the intended victims were children.

For those who would suggest that God hates doing such things, but somehow feels compelled to (however one reconciles such an idea with His being all-powerful), there is in fact some rejoicing on the part of God during such violent episodes:

Deut 28:63 “…so the Lord will rejoice over you to destroy you, and to bring you to nought…”

It is interesting to ask what a Christian would say if anyone other than God had committed such crimes. Would any Christian say, “but wait, maybe there was a perfectly good reason”?  Yet, Christians, when faced with these scriptures, make just such an appeal on behalf of God, as if such monstrous evils can be perfectly understandable in some cases.  These defenders will often argue that the bigger “context” must be considered, as if genocide, rape, and slavery aren’t inherently and universally evil, but are actually good things in certain contexts.  Many people, of course, insist that such things are universally and absolutely evil, and that no “context” can change that fact.

While humans are often forced to make lesser-of-two-evils choices due entirely to their limited powers, it is completely unclear why God, with supposedly unlimited power, should be similarly constrained.

Not many Christians these days would argue that slavery is a good thing.  Indeed, many would argue that it’s a great evil, one that every Christian should fight in the name of Christ, particularly since they believe that such things are against God’s teachings.  Such teachings did not come from the Bible, however, and must be believed despite the Bible’s actual teachings.  God’s teachings on the matter look a little more like this:

Exod 21:20-21 (NIV) “If a man beats his male or female slave with a rod and the slave dies as a direct result, he must be punished, but he is not to be punished if the slave gets up after a day or two, since the slave is his property.”
Deut 15:17 “Then thou shalt take an aul, and thrust it through his ear unto the door, and he shall be thy servant for ever.  And also unto thy maid-servant thou shalt do likewise.”
Lev 25:45, 46 “…of the children of the strangers that do sojourn among you, of them shall ye buy, and of their families that are with you, which they begat in your land, and they shall be your possession…they shall be your bondmen forever.”
Ex 21:7 “And if a man sell his daughter to be a maidservant, she shall not go out as the menservants do.”
Titus 2:9-10 (RSV) “Bid slaves to be submissive to their masters and to give satisfaction in every respect”

Indeed, God had whole peoples enslaved.  Either He thinks slavery is appropriate in some circumstances, or He knowingly committed an evil.

Joel 3:8 “I will sell your sons and your daughters into the hand of the children of Judah and they shall sell them to the Sabeans, to a people far off; for the Lord hath spoken it.”

These scriptures are perhaps not entirely unexpected when they are seen as what they are: descriptions of the angry, jealous, vindictive, and ill-tempered deity(ies) of primitive, nomadic tribes living thousands of years ago.  In that context we would also not readily expect women to be considered particularly valuable, or even as human as men.  Indeed, that is what the Bible makes clear, time and time again.  Such a view is usually justified in terms of inherited guilt; after all, it was Eve who was the guilty party in the Garden of Eden.

1 Tim 2:11-14 “Let the woman learn in silence with all subjection.  But I suffer not a woman to teach, nor to usurp authority over the man, but to be in silence.  For Adam was first formed, then Eve. And Adam was not deceived, but the woman being deceived was in the transgression.”
1 Cor 14:34-35 “Let your women keep silence in the churches for it is not permitted unto them to speak; but they are commanded to be under obedience…And if they will learn any thing, let them ask their husbands at home: for it is a shame for women to speak in the church”

But it doesn’t just stop with demanding that women “keep silent.”  There is a kind of dirtiness about women, an uncleanness.  Indeed, a woman is made even dirtier by having dirty girl children rather than nice clean boy children:

Job 25:4 “…how can he be clean that is born of a woman?”
Lev 12:2, 5 “If a woman have conceived seed, and born a man child: then she shall be unclean 7 days…but if she bear a maid child, then she shall be unclean 2 weeks.”

In fact, a woman who is not a virgin when she marries deserves to die according to the Bible.  Note that there is no similar requirement on the man.  Actually, such a woman deserves to be stoned to death; a torturously painful way to die, and undeniably cruel and unusual punishment by any standard most of us are familiar with.  Presumably, our own laws and practices today could be seen as an affront to God for granting women the right to live in such cases.  It is also painfully easy to imagine the number of virgins mistakenly slaughtered under the very weak rules of evidence employed in the following scripture.

Deut 22:20-21 “But if this thing be true, and the tokens of virginity be not found for the damsel: then they shall bring out the damsel to the door of her father’s house, and the men of her city shall stone her with stones that she die: because she has wrought folly in Israel, to play the whore in her father’s house: so shalt thou put away evil from Israel.”

Today we view rape as a crime because of the violation it represents of a woman’s rights, and the pain and anguish it causes the woman.  In other words, it is a crime because of the injustice of it to her.  In the Bible, the effects on her seem to be utterly beside the point; it is the father that is considered the injured party.  For example, a rapist is forgiven if he pays, not the victim, but her father; and in addition, agrees to marry the victim—she’s got nothing to say about it.

Deut 22:28-29 “If a man find a damsel that is a virgin, which is not betrothed, and lay hold on her…then the man that lay with her shall give unto the damsel’s father fifty shekels of silver, and she shall be his wife; because he hath humbled her, he may not put her away all his days.”

We’ve already seen God bestow the word “good” on the slaughter of children and even newborns among wartime enemies, but this act is considered particularly virtuous when the newborns are from one’s own family and tribe, with the infants thrown in, as it were, with the firstborn of the livestock.  It is interesting, when reading passages such as these, to imagine Fundamentalists showing disgust, as they doubtless do, when they read of the human sacrifice practices of the early religions of other cultures, such as the Aztecs and Inca:

Ex 13:1,2 “And the Lord spake unto Moses, saying, Sanctify unto me all the firstborn, whatsoever openeth the womb among the children of Israel, both of man and of beast: it is mine.”
Lev 27:28-29 “Notwithstanding no devoted thing, that a man shall devote unto the Lord of all that he hath, both of man and beast, and of the field of his possession, shall be sold or redeemed: every devoted thing is most holy unto the Lord.  None devoted, which shall be devoted of men, shall be redeemed; but shall surely be put to death.”
Ex 22:29-30 “Thou shalt not delay to offer the first of thy ripe fruits, and of thy liquors: the firstborn of thy sons shalt thou give unto me. Likewise shalt thou do with thine oxen and with thy sheep.”
Ex 34:20 “But the firstling of an ass thou shalt redeem with a lamb: and if thou redeem him not, then thou shalt break his neck.  All the firstborn of thy sons thou shalt redeem.  And none shall appear before me empty.”

How different are such practices from the offering of virgins to a volcano god?  Inexplicably, satisfying God’s thirst for blood appears to be somehow connected with the origin of male circumcision.  In one of the stranger passages of the Bible we have God ambushing Moses on the way to an inn—with the intention of killing him—but deciding not to when Zipporah cuts off the foreskin of her son, and throws it at God’s feet.  God is calmed down by this, and decides to let them go.

Ex 4: 24-26 “And it came to pass by the way in the inn, that the Lord met him, and sought to kill him.  Then Zipporah took a sharp stone and cut off the foreskin, and cast it at his feet…So he let him go.”

Beyond providing yet another example of the need for bloody offerings to appease an ill-tempered and brutal deity, this passage also has God acting in a manner that is far from that of an all-powerful and all-knowing being.  Indeed, He cannot simply cause Moses to die with a mere thought, but like a murderous human must physically attack His intended victim.  Further, He cannot know much about the future if He unexpectedly gets a slice of human flesh thrown at His feet (implying also that He was physically standing there like a man), which causes Him to change His mind.

With all these blood sacrifices and mutilation of people of all ages, one might expect cannibalism to enter the picture at some point.  One would not be disappointed. God Himself directly causes these instances of cannibalism.  Once again, one must assume that Fundamentalist Christians believe this is a wonderful and good thing in certain contexts in order to ensure that God also remains wonderful and good.

Lev 26:29 “And ye shall eat the flesh of your sons, and the flesh of your daughters shall ye eat.”
Jer 19:9,12 “And I will cause them to eat the flesh of their sons and the flesh of their daughters, and they shall eat every one the flesh of his friend …Thus I will do unto this place saith the Lord, and to the inhabitants thereof”


Conclusion

What picture do these scriptures paint of the Judeo-Christian God?  The picture is that of a perhaps typical tribal deity, characteristic of many early peoples.  Like a volcano god who needs the sacrifice of virgins as a ransom not to destroy the people below, the Biblical God is also brutal, ill-tempered, jealous, and demanding of regular doses of infant blood as the price for the tribe’s success in carrying out genocidal wars of aggression against its neighbors.

I can certainly anticipate the objections to this article: (1) I’ve taken the scriptures out of context; (2) I’ve misinterpreted them; (3) God changed many of those laws in the New Testament; (4) these things should not be judged from the limited vantage point of mere humans—it is part of a bigger picture that only God can see; (5) it is the result of inherited sin due to Adam’s Fall; and (6) it is Satan and man, not God, who have ultimately created all this evil.

(1) As already mentioned (and as should be obvious in any case) some evil things are context-independent.  No context can make an inherently evil thing turn into a good thing.  This position is, ironically, a kind of moral relativism, which Christians are normally loath to embrace.   The flaw in this “argument from context” is easily seen when it is parodied:  “The deeds of Hitler, Stalin, and Ted Bundy are evil only in certain contexts; in others they are beautiful and wonderful and are proof of their infinite goodness.”

(2)  These passages are fairly straightforward descriptions of horrific crimes, and include the specific detailing of laws.  Reinterpretation here can take two forms: one, that these scriptures were metaphorical; or two, that they were mistranslated.  The first approach undermines the claim that the Bible is literally true, and we can then get on with arguing that the Genesis story, or anything else, is also metaphorical.  This is indeed the approach of more liberal Christians.  Liberal or not, however, to “reinterpret” the sum total of these passages into saying the opposite of what they actually say (and there are many, many more examples) leaves one justifiably open to the charge of arbitrarily reinterpreting the Bible to say what one wishes it said.  It would be as if I reinterpreted all the literature on the roundness of the earth to be metaphorical, while insisting that it all really supports a flat earth; or as if I reinterpreted the sentence “the car is black” to mean “the car is white.”  The second approach, mistranslation, invalidates the inerrancy of the Bible outright: whether the error is one of translation or one of fact, one is admitting that there is an error.  Either approach also allows one to have the Bible say whatever one wants it to say, without regard to what it actually does, in fact, say.

(3) This position, even more so than the context argument, is the worst kind of moral relativism.  It suggests that what is morally disgusting to us today was somehow morally wholesome and beautiful to us just a few thousand years ago.  God did an about-face, and so we had to as well; if He does it again, then our frown at these atrocities would, under this view, once again turn into a smile.  Presumably we need to check before deciding how to react.

(4) This objection is met in detail in the Does Morality Depend on God? article.  I will, however, make this one point here:  If we are not competent to judge God to be evil, then by exactly the same reasoning, we are also incompetent to judge God to be good: it cuts both ways.

(5) The notion of inherited sin and Jesus’ sacrifice deserves an article of its own.  However, I will speak to it briefly here.

I often hear Christians describe, in a tone of gratitude, how Jesus died to “pay the price” for the sin we “inherited” from Adam and Eve.  Frankly, that this makes any kind of sense to otherwise thoughtful people is something I find utterly mystifying.  The story embodies two very bizarre notions that Christians unblinkingly accept as being somehow obvious truths:  one, inherited guilt; and two, that a human sacrifice is the appropriate price to pay for that guilt.

First, the idea that people inherit the guilt of their ancestors is certainly an ancient one, and more than a little out of place in the 21st Century.  One has to ask such Christians to evaluate the following analogy:  You are a juror and the prosecution is arguing that a 6-month-old infant should be burned alive because it was discovered that its great-great-grandfather committed a terrible crime.  The prosecution then interrupts the shocked outcries of the jurors by quoting from the Bible’s many references to inherited guilt and punishment.  The stunned silence is then suddenly interrupted as one of the jurors, taking the biblical lead, jumps up and says “Kill me, so the child becomes innocent of the crime of his great-great-grandfather.”  The judge and prosecutor look at each other and agree that this will pay the debt owed, atoning for the crimes of the baby’s ancestor, thereby making innocent the baby and all the other descendants of this ancient criminal.

I can only assume—hope really—that Fundamentalist Christians would be unwilling to go along with such a scenario, and that this sounds as bizarre and morally repulsive to them as it does to me and most people.  Yet this is the kind of moral logic we are to admire on the part of God for making guilt a kind of genetic characteristic, and further, requiring a human blood sacrifice of someone unrelated to the crime (i.e., Jesus) to make it all alright again.  The analogy is off in one respect: God intended to torture in hell not just one baby, but all future generations of mankind (billions of yet unborn people) if He didn’t get a blood offering.

(6)  This objection certainly contradicts the notion of God’s omniscience (being all-knowing) since it seems to suggest that God created Satan without knowing that Satan would become evil.  But, more importantly, it contradicts the Bible itself, for God makes it clear that He, and not Satan, creates evil:

Isa 45:7 “I make peace, and create evil”

Does Morality Depend on God?

Introduction

I have rarely engaged in a debate with a theist where the issue of justifying morality has not come up.  The theist’s complaint typically takes the following form.

If there is no God, then why is it wrong to murder and  steal? Even if you don’t want to murder and steal, on what grounds can you criticize someone who does, since morals must be completely relative and arbitrary to an atheist?  Without God there is no criterion for deciding what is good and evil beyond the whim of the individual. In other words, without God there is no way to answer the question, “Why is x wrong?”  As a believer in the one true God, I know why, and I know why in absolute terms.

Of course, most thoughtful theists recognize that non-theistic ethical systems can and do exist, and that in fact most atheists hold moral values not entirely unlike their own.  Indeed, most ethical systems the world over, and throughout history, share a considerable core set of values on issues relating, for example, to minimizing the suffering of innocents, and putting the needs of family, friends, and community ahead of one’s own selfish interests.  So, the debate here is not so much about whose particular body of moral codes is the right one. Rather, the debate centers on meta-ethics; that is, the means by which people justify their choice of an ethical system.  The theist can accept that an atheist might believe that murder is morally wrong based on one or another atheistic ethical system.  The theist’s complaint, however, is that the atheist’s ethical system is completely arbitrary; that the atheist cannot justify using his ethical code beyond saying, “Just because I want to.”

So it is this meta-ethical angst that is behind the theist’s heart-felt question, “Why is anything wrong if there is no God?”   Alarmingly, more than a few ask this out of a genuine view that without the fear of punishment and the promise of reward there is no reason to be good.  They hold a crude kind of when-the-cat’s-away-the-mice-will-play outlook that equates “good” to that which is rewarded, and “bad” to that which is punished.  For these folks, without someone holding carrots and sticks, there is no reason to be good; no way to know what is good. Thankfully, most thoughtful theists recognize the inherent amorality of such a simplistic approach; instead, they go to great lengths to explain not only that God is perfectly good, but that He forms the necessary “ground” of goodness, and that therefore morality really does depend on Him.  (That being said, it is surprising how often otherwise thoughtful theists will appeal to brute power when challenged to morally justify some Old Testament God-ordered atrocity, such as when they argue that since God made us, He can do as He pleases—just as a potter can do as he pleases with his clay pots.)

The thesis of this article is not only that morality does not depend on God, but that any ethical or meta-ethical theory based on such a claim necessarily commits several fatal errors: first, it necessarily commits one to an arbitrary and indefensible definition of right and wrong (which is ironic given the theist’s charge that it is non-theistic ethics that is necessarily arbitrary); second, it makes any definition of “good” vacuously, and—as a guide for human behavior—dangerously tautological; and third, such a meta-ethical view necessarily leads to ethical systems that value unthinking obedience to rules above all else.

The Problem of Circularity

The hidden assumption in the theist challenge is that, unlike the atheist, the theist can coherently answer the question, “Why is x good?”  A common debating error is to let this go unchallenged, while moving directly to a defense of an atheistic meta-ethics.  Generally, the theist’s answer to the Why question is because it is God’s will. To this a fair response is, “Why follow God’s will?” Now to this the theist’s response can be either the avoid-punishment/gain-reward answer or “Because God’s will is good.”  We’ve already eliminated the first part, so let’s look at the second part: “Because God’s will is good.”

Now for the statement, “God is good,” or more generally, “X is good,” to even make sense, we need some idea of what “good” means.  For example, if I say, “Fred is perfectly zugblub,” then you have no idea what I mean unless you have some idea of what “zugblub” means.  Suppose after pointing this out to me, I respond, “Fred is the very standard by which zugblub is defined; zugblub is part of the very essence of Fred.  Indeed, Fred actually forms the necessary ground of all zugblub.  That is what zugblub means.”

This so-called definition of zugblub communicates no information.   The problem is that I have not defined zugblub independently of Fred.  All my definition amounts to is different ways of saying “Fred is Fred” and “zugblub is zugblub.”  These are true statements to be sure, but not particularly informative ones.  In other words, my definition of zugblub is tautological—an empty truth.

For example, one critic of my original article succinctly captured this phenomenon when he said, “God says it is good because it is good. How do we know that it is good? Because God’s very nature is good. God is the standard.” 1

This is a straightforward tautology:

1.  What God says is good.

2.  God is the standard by which good is defined.

By (2) “good” means that which God is, which includes all that He says, does, and wills.  By substitution (1) becomes “What God says is what God says.”

Now, it seems clear that theists don’t actually intend the statement, “God is good” to be a statement of identity.  In other words, “good” forms part, but not all, of the definition of God.  To use Kai Nielsen’s example, theists say “God is good” in the way that people would say “Puppies are young.”  You can see this by examining everyday statements such as “It is good that you gave money to that charity.”  This is certainly not equivalent to saying “It is God that you gave money to that charity.”   Though, to the theist, it might be equivalent to saying, “It is consistent with one of God’s characteristics that you gave to that charity.”

So we can see that tautologies are not limited to statements of identity.  A statement like “Puppies are young,” or “Bachelors are unmarried,” is sometimes called an analytic statement, which means that its predicate forms part of the definition of its subject.  Such statements are tautological in that they are true by definition.  Identity statements communicate no information beyond stating a rule of logic.  Analytic statements communicate no information beyond stating a rule of language. But analytic statements can’t even do that unless we understand their predicates (e.g., “young”) independently of their subjects (e.g., “puppies”).  As Nielsen puts it,

If we had no understanding of the word “young” and if we did not know the criteria for deciding whether a dog was young, we could not know how correctly to apply the word “puppy.”  Without such a prior understanding of what it is to be young we could not understand the sentence, “Puppies are young.” 2

Another theist critic rather eloquently illustrated this kind of tautological reasoning as follows,

Goodness is not defined simply because God wills it, nor is goodness determined by a standard existing outside of God. Obedience should be seen as part of the essence of mankind – as we were created by the Creator, in His image, we have at our core the essence of His character. That is, as God is the definition of goodness and holiness, not only our self-awareness but also our standard for what is good, what is moral, flows from the immutable character of God. As God is the very essence of goodness – His character – nothing will be willed by Him in contradiction of His nature . . . Further, the concepts of human and divine morality, or goodness, are not independent things. Our recognition of goodness comes directly from the fact of our creation in His image. 3

Let’s break this down so the tautology is more apparent:

1.  “Goodness is not defined simply because God wills it.”

2.  “God is the definition of goodness and holiness.”

3.  “Nothing will be willed by Him in contradiction of His nature.”

4.  “Human and divine morality, or goodness, are not independent things” since “goodness comes directly from the fact of our creation in His image.”

(2) says that God is the definition of goodness, which again converts “God is good” into either “God is God” or “Good is good,” or an analytic statement in which “good” is not understood independently of God.

(3) is the statement that God will not act in contradiction to his nature. But of course nothing acts in contradiction to its nature. This is simply stating that God will only do those things that God does. But this is no more informative than saying that fish will only do those things that fish do, which gives us no information about the nature of fish or just what it is they do.  In combination with the other statements, it is not clear what statement (1) adds beyond the stipulation that things act within their natures not because they will it, but because it is consonant with their natures, which is just a restatement of both (2) and (3).  (4) simply reasserts the theist claim, namely, that our sense of right and wrong cannot be understood outside of God’s having given it to us, which is the very question at issue, and which forms one leg of the theists’ circular loop:

(1) We know something is good because it is a reflection of God’s nature within us.

(2) God’s nature is good.

(3) We know God’s nature is good because of (1)

This turns (1) into “we know something is good because we know it is good,” and turns (3) into “we know God’s nature is a reflection of God’s nature.”

The theists’ arguments again reduce to uttering either identity statements or analytic statements—tautologies.  The theist’s other recourse is to say, “Well, when you’re dead you’ll know, but then it will be too late.”  Of course, this is just the retreat to threats of punishment and promises of reward.  Assuming the theist does not want to take the coercion approach, then to escape the preceding tautology the theist needs an independent criterion of good upon which to judge that God is, in fact, good.

Not so, according to one theist:

If we derived our judgments of good apart from God, then the theist might be making an independent judgment. But this begs the question. The theist claims that God communicates to us our sense of judgment for determining right and wrong.  Therefore the standard is not independent but dependent on the very person and ground of morality. 4

While being yet another variation of the same tautology already reviewed, this seems to add the additional claim that we gain direct knowledge of goodness in such a way that no independent judgment is or can be made.  In other words, we are simply given awareness of good by way of revealed knowledge. This knowledge comes directly from God, bypassing the usual critical faculties (similar to so many revealed truth claims of so many mutually contradictory religions and sects); therefore, no independent judgment or reasoning of any kind is necessary, or perhaps even welcome.  Now, exactly how one knows this sense comes from God is not made clear.  How does one separate “true” revelatory claims from false ones? How do we know this additional revelatory sense is even reliable?  More immediately relevant here, though, is how one knows this sense of good corresponds to what is actually good.  This is, after all, the point at issue.  Even if one is being fed sensations or knowledge by God, how does one know he isn’t being deceived?  In short, how does one know God is in fact good?

Well, this theist’s answer appears to be simply that God is good because God says He is, and that God is the very “ground of morality.” But “ground of morality” is just another way of saying “is the standard of good,” or the “source of goodness,”  or, once again, “God is the definition of good.”  As usual, we’re right back to “good” being defined in terms of God, without any independent idea of what “good” means; therefore, the claim, “God is good” reduces to the empty truth that “God is God,” or at most, “God has God-like properties.”  Since God is good by definition, adding that God is good because God is the ground of morality just reduces to “God is good because God is good,” or, “God is God-like, because God is God-like.”  The statement, “God is good” is already tautological; repeating that tautology within yet another tautology doesn’t shed much light on the matter.

Some theists feel that they are not making “good” and “God” self-referential in this circular argument sense, but instead, have a logically independent concept of good and take as their starting premise the claim that “God is perfectly good.”  In other words, while they believe that “good” is an independent concept, it applies to God perfectly and completely.  These theists then add that while “good” exists independently of God, we just aren’t qualified to actually test or judge whether or not God is good in any particular case—we don’t see the  “bigger picture.”  While it is logically possible for God to be evil, He never is so.  The theist claims that we do not know this by experience (since no experience would ever be accepted as counterfactual evidence of God’s goodness), but we know it as a matter of faith.

What can we say about this approach?  First, notice that we still have no idea what good means independently of God.  All we can say is that, “X is good because God ordered it, God wants it, etc.”  There is no conceivable test of God’s goodness because it is being implicitly defined as whatever God does.  In other words, whatever God does is assumed to be good a priori.  This renders the notion of “good” factually meaningless.  The theist seems to be saying that there is an independent concept of “good,” but that we have only a partial understanding of it.  However, this partial understanding can be completely overridden by God’s “bigger picture” perspective, such as in the Old Testament where God sends two bears to rip apart forty-two children for the heinous crime of teasing an old man about being bald (2 Kings 2:23-24).  Such a “partial” conception of good is no conception at all when it can be completely overridden like this.

Second, even if an intelligible, independent notion of “good” were being offered, the question as to how one knows that it applies to God is not answered; it is simply assumed on “faith.” But this hardly clarifies matters since “faith” is simply a label for the act of a deliberate, bare-faced begging of the question.

So what does our discussion up to this point say about theists’ supposedly “objective” and “absolute” ethics?  Well, a theist-dependent ethical system rests on the following.

1.  They feel they should follow what they believe to be God’s will.

2.  They “should” (i.e., ought to) follow God’s will, because they believe God’s will is good.

3.  They believe God’s will is good, because God is the very standard by which good is defined.

(3), then, is the bedrock of any theistic-dependent ethical system, which I’ve shown to be a tautology: God’s will is good, and “good” means that which God wills (since He’s the defining standard).  To be based on a tautology is to be based on nothing at all.

Literally any ethical code you care to dream up can be based on an appropriately chosen tautology.  For example, let’s look at the application of the very same theistic logic to justify an ethical system based on telepathic space-alien communication.

“Good” is that which space aliens telepathically communicate to me, and those space alien communications are the very standard of goodness—they are what I mean when I use the word “good.”  Why do I believe these space-aliens are good?  Because they say so and because it is a matter of faith.  They even say that they are the ground of morality, which I also take as a matter of faith.  How do I know that these communications are from space-aliens and not from my own mind?  Because I have faith that they are not from my own mind, and because I don’t make the mistake of trusting my imperfect human reason to judge, question, or close my heart to what they are telling me.  When my heart, or my space alien organization, tells me that space-aliens have ordered me to stone adulterers to death (as in the OT), I know that is good, even if it is evil when a human orders such deeds.  Why? Well, because the space aliens are the standard of goodness, so if they ordered it, it is good by definition; and also because the space aliens genetically engineered all life on our planet, and since they made us, they can do as they please.  After all, do clay pots question the potter who made them?  This faith-based commitment is why my ethical system is absolute.

Clearly, making a commitment to any such absolute moral system, whether based on space-aliens, one or more gods, or one’s ancestors, can be considered only an extremely arbitrary act.

The Implications and Conclusion

Now, at this point some theists forget their original assertion that non-theistic ethical systems are arbitrary and subjective, unlike their own system.  They will then begin to make statements like “all ultimate standards are based on tautology.”  Some will even go so far as to argue that reason itself is the problem here, that any ethical system is by its very nature based on an irrational commitment, i.e., based on “faith,” which is where faith in the Lord Jesus Christ (for Christians) comes in.  All things considered, such comments seem like a fairly generous concession to my claim that theistic-based meta-ethics is fundamentally irrational and arbitrary.  These theists, having fallen back behind a wall of irrational commitment, do achieve at least one thing they wanted: they can no longer be criticized.  But they pay an exorbitant price.  They can no longer criticize anyone else for making a different irrational commitment.  They can no longer criticize anyone else for choosing a different ethic based on faith in another God (or even the same God in any of the myriad, mutually contradictory Judeo-Christian sects), or political ideology, or even space aliens.  Each system becomes equally “justified” by these irrational appeals to faith and/or revelation, both of which are conveniently “beyond” reason.

There is another especially pernicious aspect to ethical systems that depend on God in this way.  By pushing the concept of good beyond the scope of reason and equating it with the supposed will of God—whichever of the thousands of interpretations of this Will a theist may be referring to—“good” becomes something we feel no longer qualified to judge.  Since “good” is equated with God’s nature, and since God’s nature is somehow beyond our capacity for comprehension and critical analysis, then the only way one can know he or she is doing the right thing is to simply obey.  Indeed, obedience to rules effectively becomes an end in itself, even if those rules include such things as are found in the Old Testament: stoning to death brides found not to be virgins; sacrificing human infants; slaughtering and enslaving men, women, and children (i.e., targeting civilians) – such as in 1 Sam 15:3, “Slay both man and woman, infant and suckling.”  (See my article on this site, The Bible: A Manual for Living?, for these and many other similar references.)

As we’ve seen, when confronted with what would normally be considered crimes against humanity, the theist will respond in various ways, none of them satisfactory:  “We are His creations, and He can do as He pleases,” or “God is good regardless of His actions, just in ways that are beyond us.”  Stripped of our own ability to know an evil deed when we see it, we now have to first ask:  “Who did it?”  One is reduced to saying, “I don’t know if it was evil until you first tell me whether or not God did it. I’ll even do the deed myself, no matter how bloody or genocidal, if you first convince me that God ordered it.”  Uncritical obedience to orders ultimately becomes the only criterion of moral behavior, even when the rule is infanticide, such as illustrated in Gen 22:2 where Abraham is told to slaughter his own son.  Indeed, Abraham’s willingness to blindly follow orders – even with the tortured, frightened screams of his own child in his ears – is held up as the supreme example of moral “goodness” we should all follow.

If it is true, as some theists claim, that “God communicates to us our sense of judgment for determining right and wrong,” then shouldn’t we naturally sense moral beauty in these O.T. atrocities, since they were sanctioned by God?  Fortunately, few do.  But even if our moral instinct is one of revulsion, we are told to remember that good is defined by God.  Anything He does is good by definition, no matter what: healing sick children or having them ripped apart by wild animals.  Curiously, many Christians complain at this point that “things were different in the Old Testament.”  In other words, their “absolute” morals were different in the past.  Such a view ironically turns their absolutism into a rather extreme form of moral relativism.

Some theists will argue that the essential difference between the theistic and non-theistic approach is that their approach has ethics deriving from “outside” of ourselves, outside of humanity, which is the only way ethics can be considered objective.  This certainly begs the central question: theistic ethics comes from “outside us” only to the extent that God really exists and is the source of morality; otherwise, theistic ethics comes as much from “inside us” as does their sense that God talks to them.  Regardless, it is not at all clear that only things “outside us” can be objective.  After all, the fact that humans have two eyes is an objective fact even though it does not exist “outside” humans—it is an observable, testable, and therefore, objective fact about our natures, about what it means to be human. But if the theist is right, if one can be objectively moral only if those morals exist or derive from “outside” of the individual, then God, by exactly the same logic, cannot be said to be a moral being—unless, of course, the theist is saying that morals derive from outside of God. 5   This is where special pleading is typically introduced.  The theist will simply exempt themselves from their own rules:  “Your explanation must meet these conditions; however, my explanation (God) does not.”  Such flagrant tactics are, of course, unavoidable since the theist is contradicting his or her own assumptions.

Some have argued that since God is both perfect and omniscient, then His morals are perfect and therefore truly objective in a way that would not be possible for imperfect humans.  In addition to confusing the notions of “objective” with “perfection,” this approach quickly becomes incoherent, and again begs a central question. First, the notion of omniscience undermines the idea that God has free will:  If He knows all of His future actions and decisions, then He is not free to change them; if he doesn’t yet know his future actions, then he is not omniscient.  Second, the existence of natural evil (disease, earthquakes, etc.) contradicts any meaningful notion of God’s goodness in light of his supposed omnipotence.  Either he can prevent infants from dying slowly of parasitic microorganisms (creatures He presumably designed for just this purpose) and refuses to, in which case He is not good; or He can’t prevent it, in which case He is not all-powerful.  Third, whether or not we always do accurately discover what is morally right for us does not, in and of itself, alter the fact that there is something objectively right for us.  For example, if we erroneously concluded that certain foods were good for us that were actually poisonous, then we  would die.  In other words, our capacity for error does not alter the fact that there are objectively poisonous things for us to eat, as well as healthy things for us to eat.  Finally, of course, this again begs a central question, namely that of God’s existence as a perfect and good being.

One thing we haven’t addressed up to this point is the implicit theist assumption that if we accepted their meta-ethical theory, then we would suddenly know what the moral “rules” are—presumably through faith or revelation.  This is certainly an amazing assumption when one considers that the number of Christian sects is well into thousands, let alone the fact that the whole of this internally bickering Christendom represents only a minority of the world’s religions.  As the basis for an “absolute” ethical system it leaves much unexplained about how to tell the “right” interpretation of God’s Will from the many, many “wrong” ones.  Presumably choosing the “right” interpretation is as much an application of irrational commitment (i.e., faith) as choosing to believe in the particular deity(ies) in the first place.  Such an approach leaves us with precisely what we would expect:  an anarchy of “absolute” moral systems and religious codes, each of which proclaims itself “objective and absolute” and “above” mere human reason (and therefore immune to rational debate and investigation).  So when a member of any of these countless belief systems points to the non-theist and proclaims “you have an arbitrary and unjustified moral code” I can’t help but think of the biblical story to the effect that only he who is without sin should cast the first stone.

We’ve shown the fatal flaws in any God-based meta-ethical theory, but where does that leave us? Is there no basis for ethics at all, or are we forced to accept some relativistic one?  I explore what I believe is an objective, naturalistic meta-ethical theory in the article, Darwinian Meta Ethics.


1 Critic from <http://groups.yahoo.com/group/vantil-applied>, forwarded to me via personal correspondence by Dawson Bethrick.

2 Kai Nielsen, Ethics Without God, (Amherst: Prometheus Books, 1990), p.59.

3 Bill Hale, personal correspondence.

4 Critic from vantil-applied.

5 Dawson Bethrick shared this observation with me in a personal correspondence.

Complexity, Probability, and God

A very old, highly intuitive, and still common argument for the existence of God is the Argument from Design.  The thrust of the argument is that the ordered complexity exhibited by the universe as a whole, and in particular by living things, is evidence that there must be an intelligent designer, since the existence of such complexity cannot be otherwise explained.

The word “design” as used in this argument refers to the order and incredible complexity not just in biological systems, but also in the very intricate laws of nature that operate throughout the universe.  But, of course, “design” and “complexity” are not the same.  The theist can’t just point to an example of complexity and say, “See, it’s complex, so it is designed.”  That’s just question begging; after all, the whole question at issue in this debate is, “Does complexity imply design?”

We should clarify here what it means to say that a thing is complex.  To say a living thing is complex is to say not only that it has a lot of parts, but that the parts are arranged in a way that is unlikely to happen by chance and those same parts cooperate in a functional way.  For example, in living systems the various parts work together to do such things as run, swim, and fly—let alone live.  Complex non-living systems in the context of the design argument are usually restricted to those that appear to serve a purpose, such as the system of “parts” called the Solar System, which is so configured that the earth is kept at just the “right” distance from the sun to maintain its orbit.

The flaws in the Argument from Design are several.  First, using the existence of complexity as a proof for God amounts to a self-contradiction; second, a common form of this argument (made famous by William Paley) misunderstands how humans identify intelligent design; and third, a common version of this argument that is based on probability misrepresents the role of randomness in evolution.1  Let’s look at each of these in turn.

Setting aside any appeal to Darwinism for the moment, what could it possibly mean to say that complexity in living things implies the existence of an intelligent designer like God?  One can only assume that God, whatever that term might refer to, must have at least as much complexity as anything He is supposed to have designed.  Given the theist’s assumption that complexity requires a designer, God’s own complexity implies that He also had a designer.  Either the theist is arguing for an infinite regress of God-designers and designers of God-designers, etc., or he is contradicting his own assumption that complexity requires design.  By using God as an “explanation” the theist is doing nothing more than explaining complexity (in living things) with complexity (God’s).  But this amounts to assuming what one is trying to explain, which is no explanation at all.  It just moves the mystery back a step.

This is the same logical flaw in using God to explain existence itself.  The theist often asks, “If you don’t believe in God, then how do you explain the existence of the universe?”  This question assumes that existence must be caused, and since the universe clearly exists, it too must be caused.  The theist then concludes that God must be that cause.  Now, presumably the theist supposes that God, like the universe, also exists, in which case the theist is right back to violating his own assumptions:  If God exists, and existence must be caused, then by the theist’s own assumption, God must be caused.  By using God as an “explanation” the theist is doing nothing more than explaining existence (the universe’s) with existence (God’s).  And just as before, this amounts to assuming what one is trying to explain.

Typically the theist’s reply to these criticisms is that God is the one exception: All complexity except God’s complexity must be explained, and all existence except God’s existence must be explained.  But this is blatant special pleading.  The theist is simply exempting himself from his own rules:  “Your explanation must meet these conditions; however, my explanation (God) does not.”  Of course, anyone can play this game.  One could just as easily (and with considerably more parsimony) say all things except the universe as a whole require an explanation.

Even if one wanted to grant the theist his special exemption, other problems remain, which we can see by reviewing what it means to “explain.”  To say that something is “explained” means we’ve moved from the known to the unknown; it does not mean we moved from the unknown to the unknown.  Put simply, you cannot explain a mystery with a mystery.  If someone wants to use God to explain anything, then he would have to understand the mechanisms by which God causes something to happen, but since God is “supernatural,” then this mechanism is inherently mysterious and unknowable.   For example, to use God to “explain” complexity one would have to understand the nature of God’s supposedly uncaused complexity and the means by which it causes complexity in the natural world.  It will not do to say that such understanding is “beyond us,” forever unknowable to our limited intellects, etc.  Doing so would be no different than “explaining” rain by appealing to the mysterious properties of an unknowable rain god.  Explaining mysteries with other mysteries has all the explanatory power of saying, “It’s magic.”

We’ve seen that a supernatural God cannot serve as an explanation for anything, especially if God has the very properties His existence is supposed to explain, like the properties of complexity and existence.  But this is not the only fatal flaw in the Argument from Design.  William Paley made a rather beautiful, highly intuitive, and completely mistaken design argument when he asked us to imagine someone stumbling across a watch lying on the ground, and then carefully examining its mechanism.  He says this person would certainly observe that the watch’s

…several parts are framed and put together for a purpose, e.g., that they are so formed and adjusted as to produce motion, and that motion so regulated as to point out the hour of the day; that if the different parts had been differently shaped from what they are, of a different size from what they are, or placed after any other manner, or in any other order, than that in which they are placed, either no motion at all would have been carried on in the machine, or none which would have answered the use that is now served by it…[Therefore] there must have existed…an artificer or artificers, who formed it for the purpose which we find it actually to answer; who comprehended its construction, and designed its use.

Paley then draws his eloquent and persuasive analogy:

…every indication of contrivance, every manifestation of design, which existed in the watch, exists in the works of nature; with the difference, on the side of nature, of being greater and more, and that in a degree which exceeds all computation.

One crucial problem with Paley’s argument is that it assumes humans determine whether or not something is designed by seeing if it has an accurate adjustment of parts—that is, if it shows complexity.  But this is certainly mistaken.  We know that something is designed not by its complexity, or even the degree to which it appears to serve a purpose, but by looking for ways in which it differs from nature.  In other words, nature is the benchmark against which we compare an object to see if it is designed.

For example, many naturally occurring rock fragments just happen to have a sharp edge that is well-suited for serving the purpose of chopping meat, though this does not lead us to believe that these fragments were designed.  Yet, we have found clearly manufactured prehistoric chopping and cutting stones that were designed.  How do we know they were designed and not just examples of fortuitous rock fractures?  Clearly it is not because they are sharp, since naturally occurring rocks are also sharp; and not because they are complex, since they have neither parts nor complexity; and not because they serve a purpose, since obviously random events can make a rock very sharp.  We know these stone hand axes were designed because they have markings on them that differ from what one would find in nature—that is, they have signs of manufacture.

Because the proper criterion for establishing design is difference from nature, and not complexity or apparent usefulness, we can know that something was designed even when it is both extremely simple and has no identifiable purpose at all.  This can be seen when we realize how easy it is to recognize as designed an unidentified simple part from an unidentified machine, such as an L-bracket, or a bit of wire insulation.  If Paley were right, it seems that this should be impossible, since neither purpose nor complexity is apparent.

These examples show that purpose and complexity are not the criteria we use in establishing design.  We know Paley’s watch was designed not because it is complex and/or serves a purpose, but because we recognize it as similar to other products of human manufacture, and / or because of its dissimilarity to anything naturally occurring.  So Paley had it wrong: we don’t know something is intelligently designed because it shows complexity; we know it is designed because it shows signs of manufacture, and the only way we know something is manufactured is by comparing it with nature or by having direct experience of its manufacture.  Now, if the criterion for determining design is comparison with nature, then it makes no sense to apply that criterion to nature itself since nature provides the very benchmark for making the comparison.  But this is precisely what Paley does in order to infer a super-designer.

Even if we ignore this criticism of Paley’s argument (which is decisive on its own), other problems still remain, not the least of which is the analogical form of the argument.  Paley was arguing by analogy.  In other words, since a watch shows adaptation of parts to an end and was designed and built by a designer, then by analogy we observe that nature also shows adaptation of parts to an “end,” and infer that it too must have been designed and built by a designer.  Unfortunately for Paley’s argument, if we consistently apply his analogical reasoning, then we would reach conclusions that he surely didn’t intend.  For example, our only experience with the relationship of designers to their highly complex handiwork, like watches and computers, is that there is a whole team of mortal, imperfect designers of limited power that work on one or very few complex creations—and not one immortal, perfect, and supernatural designer for all complex artifacts. So if Paley’s analogical argument were valid, we would have to conclude that living organisms were made by a team of mortal, non-supernatural, and imperfect creatures, all of whom are limited in their powers, and that different classes of complex things were made by different teams of mortal “gods.”

So far I have argued not only that a supernatural God cannot serve as an explanation for complexity or the existence of the universe, but also that His existence cannot be inferred from these.  So at this point, complexity, particularly the complexity of biological organisms and organs such as the human eye, remains unexplained.  If I were writing before Darwin’s time, I would close at this point with the comment that while we don’t know what the explanation is for the existence of complexity, we do know what the explanation is not—and it cannot be “God.”  Fortunately, however, I am writing after Darwin’s time.  For those readers not familiar with some of the basic tenets of Darwinism I would suggest reviewing the introductory article on this site, Evolution v. Creation, and then the article Evolution: Converging Lines of Evidence.

Often, the creationist will point to the genuinely awesome complexity of something like the human eye, or the bird’s wing, and argue that random chance could never have produced such engineering marvels.  The creationist is absolutely correct.  To drive this point home, the creationist then quite correctly likens the odds of a structure like the eye forming by random chance to the odds of a tornado moving through a junkyard and producing a 747 passenger jet.  Again, this is an entirely appropriate analogy.  The odds of chance alone producing something as exquisite as the human eye—or even a single cell, for that matter—in one step are, for all practical purposes, zero.

Curiously, the evolutionist and the creationist are in complete agreement on this point. The creationist, however, seems to think that it somehow refutes evolution—much to the bewilderment of evolutionists. The reason for this is that the creationist has seriously misunderstood evolutionary theory.   Evolution’s major mechanism, natural selection, is the very opposite of random chance.  Natural selection is all about “cherry picking” and saving useful changes generation after generation.  All random chance provides is, in some sense, the “buffet” from which natural selection can select useful but slight changes.  The critical aspect of this process is that it accumulates these slight changes such that over time the accumulated change is not small at all.

The crucial point here is that there are many, many generations over which this cherry picking occurs, not just one.  To take an extremely simple example, imagine that I have a big box filled with one thousand pennies.  I shake up the box and pick out all the pennies that come up heads, which will be around five hundred (a fifty-fifty chance of coming up heads, so about half of the one thousand pennies): that’s generation one.  I then replace the missing pennies in the box and shake it up and again take out all the heads: that’s generation two.  It’s not too hard to see that in just a few “generations” I would have a pile of several thousand heads-up pennies.  The creationists’ Random Chance criticism applied to this example would amount to pointing to this huge pile of heads-up pennies and saying, “the odds of getting just one thousand pennies to come up heads is one out of a number so large it has three hundred zeros after it, and this evolutionist wants us to believe that with random chance alone he got not just one thousand heads-up pennies but several thousand!”

This imaginary critic’s calculation is absolutely correct. The odds of getting one thousand heads in a single try are so small that if we made a try every second for many billions of years it probably wouldn’t happen even once.  Yet we were able to get not just one thousand, but several thousand heads-up pennies in something like an hour.   Something is obviously wrong somewhere.  That somewhere is the creationist’s assumption that evolution is supposed to occur in one try and with no selection, which really is analogous to a tornado going through a junkyard and producing a 747.  However, in our pennies example we had more than one try, and we got our pile of heads not randomly, but by selecting them non-randomly (by cherry-picking the “good” ones) out of the random variation produced by shaking.  We then saved them up over the multiple tries – adding them to what was saved from previous generations.   Now, keep in mind that in nature, natural selection operates not over a few tries, but over many hundreds of thousands of tries (generations) and over millions, even billions, of years.
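The pennies example is easy to simulate. Below is a minimal Python sketch (the function name and parameters are my own, purely illustrative) contrasting the one-try odds with cumulative selection over a handful of generations:

```python
import random

# Odds of all one thousand pennies coming up heads in a single try:
# 0.5 ** 1000 is roughly 1e-301 -- a denominator with about 300 zeros.
ONE_TRY_ODDS = 0.5 ** 1000

def accumulate_heads(n_pennies=1000, n_generations=10, seed=42):
    """Each generation: shake a full box of pennies, then 'cherry pick'
    (save) every penny that lands heads. The saved pile carries over."""
    rng = random.Random(seed)
    saved = 0
    for _ in range(n_generations):
        heads = sum(rng.random() < 0.5 for _ in range(n_pennies))
        saved += heads  # selection accumulates across generations
    return saved

print(ONE_TRY_ODDS)        # ~9.3e-302: effectively never in one try
print(accumulate_heads())  # several thousand heads after just ten generations
```

The pile grows by roughly five hundred heads per generation, so the "impossible" outcome of several thousand heads arrives in minutes, not billions of years.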

Of course, the variation produced by random chance in the pennies example is just heads or tails, and we decided that heads are “better” than tails, so only heads “survive” into the next generation.  Also, the heads-up pennies that make it into the next generation are not more likely to produce heads instead of tails—that is, they don’t reproduce by copying themselves in the way biological systems do.  This is why we won’t get anything complex out of the pennies example, though we do accumulate changes that dramatically “beat the odds.”  Natural selection in biology beats the odds in just this way, but it does so with heredity and a much, much larger range of possible variations than just heads and tails—variations that interact with each other to create many different orders of effects that also interact with each other—that is how biological complexity emerges.  How this process creates complexity is discussed elsewhere on this site, and is even demonstrated through our Freethought Debater version of Richard Dawkins’ Biomorphs program, so it won’t be repeated here.  The point of the preceding discussion is to show another fallacy underpinning the Argument from Design: the misuse of probability when criticizing the theory of evolution.

I have argued that there is no coherent way to use God as an explanation for the existence of complexity since, among other things, God himself is presumably complex.  Similarly, God cannot even be used to explain existence since God, when used as an explanation, presumably exists himself.  I have also argued that Paley’s Watch Argument mistakenly supposes that the existence of complexity and apparent purpose in an artifact are the criteria for establishing that it was designed, and it also makes an inappropriate analogical argument between man-made artifacts and nature.  Finally, I pointed out the most common misuse of probability calculations by theists in their attempts to prove the existence of God and/or attack the plausibility of evolution—namely, that of ignoring the fact that natural selection is not about the appearance of random, spontaneous complexity in one event, but about non-random cumulative selection, which occurs over many, many tries.


1 Excellent discussions of the design argument can be found in Richard Dawkins, The Blind Watchmaker (New York: W.W. Norton, 1996), as well as George H. Smith’s Atheism: The Case Against God (Amherst: Prometheus Books, 1989).  A more technical analysis of these arguments can be found in Michael Martin, Atheism: A Philosophical Justification (Philadelphia: Temple University Press, 1990).

A Response to Locke’s “The Scientific Case Against Evolution”

A reader recently forwarded a link to Robert Locke’s short article entitled The Scientific Case Against Evolution.1  The author, while claiming not to be a creationist, certainly writes in the curious even-evolutionists-now-realize style so common among creationist writers.  The article, while not offering new arguments (and not pretending to), does do a good job of succinctly summarizing the current repackaging of the standard creationist themes, so it seemed worthy of a point-by-point response.

Locke’s first claim is that there have always been “distinguished scientists” of “impeccable credentials” who don’t agree that evolution has been proved.  Strangely, Locke’s examples are Richard Owen and, curiously, Stephen Jay Gould.  Owen was indeed a distinguished scientist—back in the 19th century.  He entered medical school in 1824.  If the credibility of a scientific viewpoint in the 21st century can be established by citing the particular views of scientists who lived nearly two centuries ago, then some very strange ideas would suddenly become “credible.”  Indeed, by reference to the past I could argue that many distinguished scientists refuse to accept the structure of the atom; and if I go back just a bit further I could even argue that many distinguished scientists refuse to accept the roundness of the Earth, or the fact that the Earth is not the center of the universe.  Certainly, when one wants to give credibility to an idea by showing how mainstream its supporters are, one would do better to refer to the mainstream of modern science.  Darwin’s genius, like Copernicus’s, was not merely that his ideas were revolutionary in his time and supported by the evidence available to him, but that they were also overwhelmingly confirmed by later discoveries, and even by later sciences, becoming not only mainstream, but basic, introductory knowledge for students of these fields.

As for Stephen Jay Gould, I can do no better than to let him speak for himself:  “…it is infuriating to be quoted again and again by creationists—whether through design or stupidity, I do not know—as admitting the fossil record includes no transitional forms.”2

Of course, there are many, many examples of transitional forms, some of which Gould cites, and many others readily available from any number of academic sources.  It is important to recognize that this anti-evolutionist appeal to “lack of intermediates” utterly depends on there being no intermediates whatsoever.  If intermediates are found at all—even if there are not as many as some of the more extreme gradualist models might suggest—then the criticism is falsified.  As Gould writes,

The supposed lack of intermediary forms in the fossil record remains the fundamental canard of current antievolutionism.  Such transitional forms are sparse, to be sure, and for two sets of good reasons—geological (the gappiness of the fossil record) and biological (the episodic nature of evolutionary change, including patterns of punctuated equilibrium, and transition within small populations of limited geographical extent).  But paleontologists have discovered several superb examples of intermediary forms and sequences, more than enough to convince any fair-minded skeptic about the reality of life’s physical genealogy.3

Gould goes on to describe many of these examples, and elsewhere he even walks through the richly detailed step-by-step, gradual evolution of the whale as revealed by recent fossils.4  Gould’s “punctuated equilibrium” contribution is simply to point out that evolutionary change does not always proceed at a constant rate.   Nonetheless, numerous fossil sequences do show transitionals grading continuously between successive species within the same taxon, and crossing from one taxon into another.  The reptile-to-mammal transition is richly documented, as are the amphibian-to-reptile and the fish-to-amphibian transitions.  As for transitions between closely related species, one need go no further than the well-known intermediates between humans and our recent common ancestor with the great apes (e.g., Homo habilis, Homo erectus, and the various australopithecines), whose features are as intermediate as you could ever hope to imagine.  Indeed, it’s puzzling to see their intermediate nature simply denied by antievolutionists, whose rejection seems based on forcing discrete categories onto what is essentially a continuum.  This tactic simply defines intermediates out of existence, meaning that, a priori, nothing would count as an intermediate.  It is as if I were to claim that color does not exist in a continuum, and when someone showed me something between, say, red and orange, I simply declared it to be in one category or the other (e.g., “It looks more red than orange, so it’s red”).

It is instructive to ask the antievolutionist in such situations for an example of what they would accept as an intermediate—even hypothetically.  Typically, they do one of the following: (1) describe something that is completely inconsistent with evolution, and that, if it were actually found, would tend to disprove rather than prove evolution; or (2) describe a creature that is intermediate in every feature, which is also not expected under evolutionary theory.  For example, Locke’s comments on the lungfish suggest that he thinks intermediates should be intermediate in all characteristics.  This is simply false.  Intermediates consistent with the actual theory of evolution are precisely what we do find in abundance.

It is very important to understand the difference between evidence that shows that descent with modification occurred (that is, that all life is related somehow through common descent in a “tree of life”) and evidence for or against particular mechanisms and tempos by which it occurred.  I refer you to my article on evolution at Evolution: Converging Lines of Evidence for more on this critical distinction, which antievolutionists, including Locke, repeatedly ignore.  These antievolutionists repeatedly recast mainstream scientific criticism of some theory of mode or tempo (e.g., Gould’s attack on steady-state gradualism) as an attack on evolution itself (i.e., descent with modification).  This is not merely false; it represents a serious misreading of fundamental evolutionary concepts.  The scientists miscast as “critics” of evolution (like Gould) slap their heads at this, and write pointed rebuttals like the one cited earlier, but they are simply ignored.  Contrary to what Locke cites Denton as saying, the fossil record has amassed far more evidence than is necessary to prove that all life is in fact related through common descent.  None of this evidence puts descent with modification in doubt at all—quite the contrary: the number and range of transitional forms is now quite large and still growing.  But the proof that evolution has occurred goes far beyond the fossil record or any one line of evidence (see Evolution: Converging Lines of Evidence).

Locke goes on to say, “The problem with this theory [punctuated equilibrium], which is too complex to go into in detail here, is that while it explains away the non-existence of small gradations, it still requires there to be large ones (the individual spurts) and even these aren’t in the record.”  It’s unclear what he means by “individual spurt,” since individuals don’t evolve; populations do.  In any case, this is a rather egregious misreading of Gould.  As my earlier quote shows, Gould, and evolutionists in general, hardly believe in the “non-existence of small gradations.”  Gould cites many of his own examples, which barely scratch the surface of the examples readily available.  And as for his theory requiring large gradations, Gould goes on: “Continuing the distortion, several creationists have equated the theory of punctuated equilibrium with a caricature [that] major transitions are accomplished suddenly by means of ‘hopeful monsters.’”5  Gould then quotes Duane Gish as claiming that Gould believes a reptile laid an egg from which a fully formed bird emerged, to which Gould remarks, “Any evolutionist who believed such nonsense would rightly be laughed off the intellectual stage.”6  Locke is not clear as to how large a “spurt” he is referring to, but the idea that Gould believes that significant evolutionary change occurred through such sudden jumps is simply false.  (Perhaps Locke is misunderstanding the significance of the uncontroversial fact that some genes have a more dramatic effect on morphology than others, and that these genes can also be affected by natural selection.)  Punctuated equilibrium simply argues that the tempo of evolution is often relatively rapid when speciation is occurring, because speciation often occurs in small, geographically isolated populations, with stasis being the rule otherwise.  But “rapid” or “sudden” here is used in the geological sense, not in a literal, everyday “overnight” sense of the word.
By “rapid” Gould means tens of thousands of years.  But since a single species can be depositing fossils for many millions of years, the speciation period can represent considerably less than 1% of that time span.  And despite that, we still have many examples of transitional fossils.

Locke’s comments on computers reveal a misunderstanding of cladistics and a neglect of the vast literature showing the incredible contributions to, and logical validation of, evolutionary theory that computers have provided.  The burgeoning fields of artificial life, genetic algorithms, and evolutionary programming exist only because natural selection and descent with modification are logically coherent concepts and describe highly creative processes. (If Darwinism were the tautology some creationists claim it to be, then one couldn’t simulate it in software and expect it to do anything.)  These sciences depend on creating a virtual world where neo-Darwinism can be instantiated.  This is done by creating the software equivalents of genes, random mutation, random genetic recombination, and environmental forces that affect the relative “fitness” of the resulting gene combinations.  Famous examples include “Tierra” and “Polyworld.”   (In Tierra, parasitic life forms unexpectedly evolved, to the utter astonishment of Tierra’s creator.  Even more astonishing was the evolution of an immune response to these parasites and an ensuing “arms race” between these emergent species.)7  Such techniques are even now being used in industrial applications to let evolution “design” solutions to engineering problems, such as coming up with efficient aircraft wing shapes.  These facts refute in a stroke all attacks on evolution based on supposed logical incoherence or meaninglessness (such as the circularity/tautology objection).  In my other evolution article, Evolution: Converging Lines of Evidence, I discuss cladistics in more detail, and show why Locke is quite mistaken when he claims that an evolutionary tree is only the product of “pattern craving” human minds.
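To make the point concrete, here is a tiny cumulative-selection program in the spirit of Dawkins’ well-known “weasel” demonstration from The Blind Watchmaker (the target phrase, population size, and mutation rate are illustrative parameters of my own, not a model of any real genome): random mutation supplies the variation, non-random selection saves the best of each generation, and the target is reached in a modest number of generations rather than the astronomically many single tries blind chance would need.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Number of positions that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(parent, rate, rng):
    """Copy the parent, randomly miscopying each character with prob `rate`."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in parent)

def evolve(pop_size=100, rate=0.05, seed=1):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)  # pure-chance start
    generation = 0
    while parent != TARGET:
        generation += 1
        # Random variation supplies the "buffet"; keeping the parent in the
        # pool means selection never loses ground it has already gained.
        pool = [parent] + [mutate(parent, rate, rng) for _ in range(pop_size)]
        parent = max(pool, key=fitness)  # non-random cumulative selection
    return generation

print(evolve())  # on the order of a hundred generations, not 27**28 tries
```

Note that this is deliberately simplified (a known target, one parent per generation); real genetic algorithms and artificial-life systems like Tierra have no target at all, only selective pressures, which makes their creativity all the more striking.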

Locke also states that, “…if different species have common ancestors, it would be reasonable to expect that similar structures in the different species be specified in similar ways in their DNA and develop in similar ways in their embryos; this is frequently not so.”  Without examples, it’s difficult to understand what he is asserting.  All life shares a common ancestor—if you go back far enough.  If two species share a recent common ancestor, as modern humans and chimpanzees do, then much is indeed “specified in similar ways” and does “develop in similar ways in their embryos.”   But where the common ancestor is much further back, the expectation naturally changes.  Indeed, the whale is a creature whose ancestry traces back onto land and then, if you go back further still, back to the sea yet again.  Does this mean that its “fins” should be like a fish’s fins because all life shares a common ancestor?  The answer, of course, is no.  Locke’s comment here seems to reveal a misunderstanding of the significance of homologous and analogous morphologies in evolutionary theory.  Indeed, it is evolution that predicts and explains what we do in fact see in the natural world.  This subject is also covered in more detail in the previously mentioned article.

Locke’s comments on the horse are a bit vague as well.  He seems to be saying that the earliest horse and the modern horse are too similar to each other, so that showing a gradual transition between them doesn’t prove much, yet he seems to question the transitional evidence regardless.  Of course, the earliest “horse” is a little, terrier-sized forest dweller called Hyracotherium, which lived some 54 million years ago.  Is he calling it a horse in order to be able to declare that the transition evidence doesn’t prove much?  Whether you choose to call Hyracotherium a horse or not, one still needs to explain the process by which it so dramatically changed over those 54 million years.

When Locke says, “And even the emergence of one species from another has never been directly observed by science,” he reveals a serious misunderstanding of the scientific method itself—as it applies to all of science.  Surely, science is more than simply reporting what we directly observe; in fact, science is all about using what we can see to predict and describe what we cannot see.8  For example, no one has “observed” electrons orbiting the nucleus, or seen the inside of the sun, or even seen germs actually making a living human sick (we observe the illness in the patient, and we observe the germs under a microscope in a lab, but never the one directly causing the other).  Regardless, we can know these models describe reality when they make successful predictions (e.g., “if the atom has structure x, then i, j, and k should be true—are they?”).  Locke’s is the type of criticism that, if accepted, would require throwing out not just evolution but much of physics, astronomy, geology, and the other physical sciences, not to mention our rules of evidence in courts of law.  Indeed, though I suspect Locke and others using this argument do not realize it, it is an anti-reason argument.  If direct observation were the only way we could be sure of anything, then our legal system would be in jeopardy, since the only acceptable evidence would be eyewitness testimony—no genetic evidence, no fingerprints, no hair, no physical evidence whatsoever, since none of these involves direct observation of the crime actually happening.  Clearly, there is something wrong with this line of criticism.  Yet, despite the fact that direct observation of an event is not necessary in order to know that the event really happened, speciation has been directly observed in the lab on multiple occasions.

Locke also brings up a classic of creationist argumentation:  the “can’t get there from here” charge.  This is illustrated by the “half of a wing,” or “half of an eye” example.  Sometimes this is called the “irreducible complexity” argument.  All forms of this argument are based on a serious confusion as to what evolutionary theory is actually saying.  First, it’s important to understand why half wings are not predicted by evolution.  In fact, if we ever did find evidence of a species with what amounted to modern wings but “chopped in half” (as if waiting to “evolve” the other half), then evolutionary theory would actually have a big problem on its hands.

Natural selection operates only in the immediate, local environment—it has no intelligence, no purpose, and no ability to see even one generation into the future.  Novel gene combinations, additions, or mutations will spread in a population only if they increase the odds (even slightly) that the affected offspring will live long enough to make babies.  This is just a mathematical property of the system:  since offspring with these “helpful” genes are more likely to make babies, more babies in the next generation will have these genes—since they inherit their genes from their parents.  And the number of babies with these genes will keep increasing in each generation as long as these genes continue to be helpful.  (This lack of foresight in natural selection is also why evolution cannot “go back” and redesign something; it can only work with what is at hand.)
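This “mathematical property” is easy to demonstrate with a toy model (my own illustration; the fitness numbers are arbitrary): give carriers of a helpful gene even a one percent reproductive edge, and its frequency in the population climbs generation after generation.

```python
def allele_frequency(p0=0.01, advantage=0.01, generations=1000):
    """Deterministic toy model of selection: carriers of the helpful
    allele leave (1 + advantage) offspring for every 1 left by others.
    Returns the allele's frequency after the given number of generations."""
    p = p0
    for _ in range(generations):
        w = 1.0 + advantage             # relative fitness of carriers
        p = p * w / (p * w + (1 - p))   # renormalize for the next generation
    return p

print(allele_frequency(generations=0))     # 0.01: the gene starts out rare
print(allele_frequency(generations=1000))  # near 1.0: almost universal
```

Even a tiny, statistical edge compounds; this is why “slight” advantages are all natural selection needs.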

This means that useless partial stumps of would-be “future” wings, or anything else, would get in the way, and so be selected against.  So, when we do see useless appendages (like the vestigial leg bones inside some snakes), they represent the withering away of past features that are no longer useful; they don’t represent the “embryo” of some future characteristic.  As a result, we expect all life forms, ancient and modern (including what we call “transitionals”), to be generally well adapted to their environments.  It’s not as if today is the day that all species were working toward and are now finally “done” evolving, while in the past they were only “half way” there.  There is nothing special about now as opposed to the distant past or the distant future.  In the distant future, species that appear “complete” to us now will probably be described as transitional by future paleontologists—they might even be talking about our species.

So what about birds and their wings?  How can they have been adapted to their environments in the past if they couldn’t even fly?  The answer is that the ancestors of birds were adapted to a completely different environmental niche than their modern-day descendants, one that had nothing to do with flying.  Characteristics that would later support flight, like feathers, originally appeared for a completely unrelated reason, perhaps something like insulation—a purpose that had immediate benefit.  Remember, potential future usefulness doesn’t count.  Basically, a new incipient structure can arise (a) because the genes that produce it are being selected for a completely different reason—generating the incipient structure as a side effect, which is then able to serve an unrelated purpose of its own; (b) because it directly intensifies some existing ability or function; or (c) because the structure was used for one thing but is now being used for something else, which makes it subject to completely different selective forces.  Note that this predicts that creatures should be readily found that are inefficient in their designs, which is easy enough to do (look at a panda’s “thumb” or the bizarre embryological processes that I cite in my other article).

To clarify this very important point, let’s use an analogy:  think of the hypothetical “evolution” of a brick arch bridge.  First, there’s no need for a bridge; there is just a pathway.  But farms on each side of the pathway want to keep their cows in, so they put up fences.  A mild earthquake creates a small crack, but people can still jump over it easily enough.  The crack widens over time, so some fence planks that are no longer needed are taken down and laid across the crack.  As the crack widens, the planks begin to sag, so scattered bricks that have fallen from passing wheelbarrows are collected and stacked in columns beneath them for support.  Initially, just a few bricks in a single column are sufficient, but over time the crack continues to widen and deepen.  As a result, more and more stacks of bricks are gradually added, so that the supports eventually form a wall, like a dam.  Eventually, increasingly heavy water flow through this ever-deepening fissure knocks out some of the lower bricks; however, the bridge doesn’t need to be repaired, because the upper bricks that used to be supported by these lower ones are now equally well supported sideways by their ever-pressing neighbors.  The lower bricks from the center of the span are allowed to wash away, leaving an arch connecting either side of the creek.  A creationist then comes along and points out that this must have been designed and built in one event as an arch bridge; after all, an arch bridge is “irreducibly complex”: what good is half an arch?  Indeed, what could have held up half an arch?  Well, in this case there never was half an arch.  There were intermediate structures that served other purposes and worked well enough in their day.  At each point in its history the bridge structure was “complete.”  At each stage, minor steps were taken for immediate needs using only the materials on hand at the time.  And at no point was there a grand design, or an end game in mind.

This crude analogy shows how evolution can inadvertently create what amounts to “scaffolding,” which can disappear when it is no longer needed, leaving a free-standing structure that would collapse if any of its current parts were missing—creating what appears to be an irreducibly complex structure.  This applies not only to morphological structures, but to chemical processes as well.  This explanation is not revolutionary, and it can be seen at work in the fossil record in cases such as the formation of the mammalian ear bones from parts of the reptilian jaw.

Locke’s comments regarding the findings from molecular biology (i.e., protein comparisons) are difficult to make sense of, unless he misunderstands what he is describing.  Surely Locke understands that proteins, and by extension genes, are not all expected to diverge equally in proportion to evolutionary distance.  The degree to which different proteins can diverge is a function of the nature of each protein and its role.  Some proteins can fulfill their functions with a wide variety of amino acid substitutions (which trace back to substitutions in their genes); others cannot.  Those that cannot would not be expected to show much change at all, even in “deep time,” and their conservation over time (and therefore across widely disparate species) is expected; it is the result of what is called “purifying selection.”  However, those proteins that are functionally unconstrained do accumulate differences, and they do so in proportion to evolutionary distance—and it is this fact that is explained only by evolution: why would functionally unconstrained proteins diverge in proportion to evolutionary distance unless that distance were real?  The molecular evidence here is truly vast, and conclusive on its own, and I mention some of the most powerful of it in my other article.
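The contrast between conserved and unconstrained proteins can be made concrete with a toy simulation (entirely my own illustration, not a real molecular-clock analysis): two lineages copy the same ancestral protein; under purifying selection every mutation is rejected, while at unconstrained sites mutations accumulate, so the two sequences drift apart roughly in proportion to the time since they diverged (until the signal saturates, which real analyses correct for).

```python
import random

def diverged_fraction(time_steps, constrained, length=200, mut_rate=0.01, seed=7):
    """Two lineages inherit the same ancestral sequence of amino acids
    (coded 0-19). Each time step, each site in each lineage may mutate;
    under purifying selection (constrained=True) every change is rejected."""
    rng = random.Random(seed)
    ancestor = [rng.randrange(20) for _ in range(length)]
    a, b = list(ancestor), list(ancestor)
    for _ in range(time_steps):
        for seq in (a, b):
            for i in range(length):
                if rng.random() < mut_rate:
                    if not constrained:          # selection vetoes the change
                        seq[i] = rng.randrange(20)
    return sum(x != y for x, y in zip(a, b)) / length

print(diverged_fraction(200, constrained=True))   # 0.0: conserved protein
print(diverged_fraction(50, constrained=False))   # modest divergence
print(diverged_fraction(200, constrained=False))  # much greater divergence
```

The pattern the text describes falls out directly: conserved proteins look alike even across distant species, while unconstrained proteins differ more the longer ago the lineages split.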

The “can’t get the ball rolling” argument that Locke makes is, of course, irrelevant in a very important sense:  the evidence that life shares a common ancestor is completely independent of any theory as to how the root of that tree of life “got started.”  Indeed, many evolutionists are Christians, or other types of theists, and believe that God got the ball rolling.  Nonetheless, there are very plausible accounts (Nobel laureate Christian de Duve has an interesting one in his book Vital Dust9), though not enough evidence yet exists to conclusively support one plausible path over another.  Telling experiments, such as Stanley Miller’s, hint at pieces of the answer by showing, for example, how natural inorganic processes can form the amino acid building blocks of life—spontaneously.  No specific path may ever be found, but the evidence that evolution occurred is quite independent of that question.  We can know that it happened without knowing all the details of how it happened, and the evidence that it happened is overwhelming and conclusive.

Locke’s call for intermediates is also quite misplaced here.  Archaea (the so-called “archaebacteria”) have been found, and their genetic differences from the rest of the living world are as profound as one would expect under the evolutionary model.  What’s more, these ancient life forms find oxygen utterly poisonous.  We know from geological evidence that the atmosphere was oxygen-free for huge stretches of our planet’s past.  This fits well with the idea that an oxygen-rich atmosphere would have destroyed that early life and any earlier “proto-life.”  When these early microbes multiplied beyond a certain point, their own waste – oxygen – killed most of them off everywhere except in the exotic, oxygen-free environments where they persist today; others not only evolved to tolerate oxygen, but eventually came to need it.  The bottom line is that the world is a very different place now than it was then.  The presence of oxygen would be highly corrosive to the kinds of complex molecules that might represent the intermediates between life and non-life, destroying them almost immediately.  As in our arch-bridge example, the “scaffolding” that such prebiotic intermediates represent may well have long since washed away, and the world is now far too poisonous (oxygen-rich) a place for them to ever appear again.  All we may be able to do now is infer their existence.  But such inferences can be as conclusive as the inferences that tell us of the structure of the atom, or of the guilt of a criminal in a court of law.

Locke also makes a passing reference to the “odds” of any of it happening by “chance.”  When he uses the phrase, “too complex to have been thrown together by any known non-living chemical event,” he demonstrates a common misapplication of probability theory to this subject.  For a discussion of the flaws in such misapplications of probability, I refer you to my article on Complexity, Probability, and God.

Locke also mentions Karl Popper.  I can only assume Locke is unaware that Popper’s critique was never of descent with modification, but of natural selection—and that Popper admitted he was mistaken even in that once he learned a bit more about the subject.  Popper consistently held that Darwin’s core thesis—that all life is related through descent with modification—was not only testable, but was the most successful explanation of all the data.  Indeed, he considered descent with modification to be “historical fact.”10  Locke’s sources are seriously misinformed.  Regarding natural selection, Popper retracted his tautology charge, pointing out that natural selection “can be so formulated as to be far from tautological.”11  Popper was of course right that some people do come up with untestable natural-selection stories (Gould called these “just-so” stories) about how some physical feature or other came to be.  But misapplication of, or untestable speculation based on, the concepts of an otherwise proven theory is no more evidence against the theory than untestable speculation about how some physical event may have happened is evidence against physics.  To suppose that it is would be to suppose that these speculations provide the evidence for the theory, which of course they do not.

Ironically, it is creationism that fails Popper’s falsifiability criterion, not evolution.  In my evolution article I make clear how evolution (descent with modification) is repeatedly falsifiable and yet resoundingly passes these tests.  Creationism, however, isn’t even a theory. (As someone once put it, “It’s not even wrong.”)  It seems little more than a set of mutually contradictory and ill-informed attacks on evolution.  Stripped of this negative content, it has no substantive positive content of its own beyond saying, “things are the way they are because God must have wanted it that way.”  What does that predict?  How does a scientist use such a “theory” to gain additional insights in the lab or in the field?  What could conceivably count as evidence against it?

I cannot bring myself to let Locke’s analogy between evolution and Newtonian physics get by.  Certainly, the evidence that supported the validity of Newtonian physics was vast and convergent, just as it is with evolution.  It is also true that Newtonian physics was later discovered to be a special case of a more general theory of physics, relativity.  But this in no way means that Newtonian physics was falsified.  What it does mean is that Newton’s formulas and theories remain true over the domain of problems to which the theory is ordinarily applied; while Newton imagined that they would work even in the most extreme circumstances, it turned out that other factors become dominant in those extremes.  Such advancement is what science is all about: building on, rather than erasing, the work of others.  Indeed, we use Newtonian mechanics today, not relativity, to put people on the moon and to design our airplanes, our cars, our satellites, and our buildings.  If evolution were found to be a special case of a broader theory in the way that Newtonian mechanics was, it would not be much of a cause for antievolutionist celebration.

I have to conclude that Locke’s “Scientific Case” not only fails to make its case, but is not very scientific.  His claim of a “major trend” in biology against evolution would come as a surprise to biologists the world over, and his evidence to support this claim is either hundreds of years old, or is based on rather gross (though, I’m sure, unintentional) misrepresentations of both science and individual scientists.  It brings up old, unoriginal arguments that have long since been discredited, arguments based on misrepresenting scientists’ views, misrepresenting evolutionary science, and even misrepresenting the scientific method itself.  Indeed, my Gould quotes date back to the mid-1980s (Popper’s retraction dates back to the 1970s), and yet almost a quarter century later we continue to see antievolutionist and creationist writings misquoting and misrepresenting his work in exactly the same way they did back then, without even a passing acknowledgement of Gould’s repeated and forceful refutations of these same misrepresentations of his hard work.  (Even if Locke is not a creationist or “antievolutionist,” he uses and draws upon sources that rely on these discredited arguments and misrepresentations.)  Such is yet another reason why creationism is not considered science at all.


1 Robert Locke, “The Scientific Case Against Evolution,” 5/2/2004, <http://www.godandscience.org/evolution/locke.html>.

2 Stephen Jay Gould, “Evolution as Fact and Theory,” in Science and Creationism, ed. Ashley Montagu (New York: Oxford University Press, 1984), p. 124.

3 ibid.

4 Stephen Jay Gould, Dinosaur in a Haystack: Reflections in Natural History (New York: Random House, 1995), pp. 359–376.

5 Gould, “Evolution as Fact and Theory,” p. 124.

6 ibid.

7 A good introduction to the field is Steven Levy, Artificial Life: A Report from the Frontier Where Computers Meet Biology (New York: Random House, 1992).

8 Excellent discussions of this point appear in Philip Kitcher, Abusing Science: The Case Against Creationism (Cambridge: MIT Press, 1993).

9 Christian de Duve, Vital Dust: The Origin and Evolution of Life on Earth (New York: Basic Books, 1995).

10 Quoted in Robert T. Pennock, Tower of Babel: The Evidence against the New Creationism (Cambridge: MIT Press, 1999), p. 100.