The debate between faith and reason is in many ways the decisive battleground in the larger debate between theism and atheism, because most defenses of theism appeal to the inadequacy of reason. Typically these defenses claim that there are appropriate spheres for reason and appropriate spheres for faith, and that belief in God comes from recognizing the proper role of faith and the associated “limitation” of reason. Some theists argue that one can believe in God using both faith and reason. Once again, we should define our terms.1 “Faith” means that one considers a particular claim (e.g., “God exists”) to be actual knowledge, absolutely certain knowledge. This certainty is held in the absence of adequate evidence, or in direct contradiction to the evidence. Evidence is considered relevant only insofar as it supports the proposition, and irrelevant or inadequate to the extent that it does not.
“Faith” has multiple usages, and in debates the meaning often shifts. A theist might state that an atheist has “faith” too: faith that the sun will come up tomorrow, or that the airplane one is about to board won’t go down in flames. Clearly, this is not the same sense of the word that theists use when they say they have “faith” that God exists. One can be virtually certain that the sun will come up tomorrow, and this certainty comes from evidence analogous to a repeatable experiment: every day, the sun has come up. Of course, it is not absolute; an unanticipated event like the sun exploding could force us to revisit our expectations. The airplane example is another case of reasonable expectation based on historical evidence, and the (fortunately rare) exceptions are precisely why we can never be truly certain, when boarding a plane, that it won’t go down. The theist, however, is absolutely certain that God exists, absolutely certain that no future evidence will appear that would change his or her mind.
“Reason” means the application of logical principles to the available evidence. While the principles of reason and logic are certain, the conclusions one draws from them are only as certain as the underlying assumptions, which is why science is rarely, if ever, absolutely certain (though many of its theories are established to a very high degree of probability). In fact, scientific theories are rarely “deduced”; they are “inferred”—that is, they are based on inductive logic, generalizing from specific examples. An inferred theory, if it is any good, will make independently testable predictions and will explain a range of phenomena that had previously seemed unrelated. When multiple, independent tests corroborate a theory, it can, from a statistical standpoint alone, become virtually certain.2
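The statistical point in the paragraph above (independent corroborating tests can drive a theory’s probability arbitrarily close to certainty, without ever reaching it) can be illustrated with a toy Bayesian calculation. This sketch is purely illustrative and not from the text: the prior probability (0.5) and the per-test likelihood ratio (10, i.e., each passed test is ten times more likely if the theory is true) are arbitrary assumptions chosen for the example.

```python
def posterior(prior, likelihood_ratio, n_tests):
    """Update a prior probability through n independent corroborating
    tests, using the odds form of Bayes' theorem."""
    odds = prior / (1 - prior)            # convert probability to odds
    odds *= likelihood_ratio ** n_tests   # each independent test multiplies the odds
    return odds / (1 + odds)              # convert odds back to probability

for n in range(6):
    print(n, posterior(0.5, 10, n))
# With these (assumed) numbers, 3 tests already give 1000/1001, or about 0.999
```

Note that no finite number of tests yields a probability of exactly 1, which matches the claim above: corroborated theories become virtually certain, never absolutely certain.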
The critical point here is that while almost nothing is certain, not everything is equally uncertain. Our theories can be ranked by the evidence supporting them, and our degree of “belief” should be ranked accordingly; that is, we “believe” in proportion to the evidence—all the way from “completely unsubstantiated” to “some possibility” to “virtually certain.” Compare, for example, the theory that leprechauns really do exist with the Germ Theory of Disease. Neither one is certain, but one is far closer to certainty than the other.
I stated that the principles of logic are “certain.” This touches on a particularly important part of the faith vs. reason debate. Often, the advocate of faith will say, “But you can’t prove the truth of logic, so you must have ‘faith’ in it—just as I have faith in God.” This critique of reason brings to mind the story of the child who answers every explanation offered by the parent with another “why?” Of course, this regress of questions cannot go on forever. To understand when to stop asking “why?” is to begin to understand the nature of concepts. Concepts do not exist in a vacuum. With one class of exceptions, concepts derive their meaning from some immediately ancestral set of concepts and retain their meaning only within that context. You hit “bedrock” when you reach the so-called axiomatic concepts, which are irreducible, primary facts of reality—our “percepts.” These percepts form the foundation upon which we build our concepts. How do you know when you have finally hit these primary facts of reality in the long string of whys? You know—and this is critically important—when there is no way to deny them, or even to question them, without presupposing that they are, in fact, true. To deny them, or even to question them, is literally to utter a contradiction.
This “bedrock” test is very specific. Let’s illustrate it with an example. Suppose I say, “Logic is an arbitrary human invention and could be wrong.” Well, if it is wrong, then the Law of Contradiction (a thing cannot be both itself and its negation at the same time and in the same respect) and the related Law of Identity (a thing is itself) are wrong; but then the very words that make up my original claim are unmoored: “Logic is arbitrary” could mean “Logic is not arbitrary,” or it could mean both at the same time and in the same respect. In fact, it could mean “I like chunky peanut butter.” If all that sounds crazy and unintelligible, that’s because it is, as are all utterances once the truth of logical principles cannot be assumed. The point here is that without the assumed truth of logic, language itself becomes impossible. So the contradiction is this: for my original statement to have any meaning at all, logic has to be true, yet the content of that statement questions this very truth. Logic, then, is not accepted on “faith” but as a necessary, self-evident truth, something required in order to speak or think at all. The same can be shown for the concepts of existence, consciousness, and the reliability of the senses. Again, there is no way to talk about any of these things being possibly untrue without first requiring them, implicitly, to be necessarily true.
In life one is exposed to claim after claim (aliens, Heaven’s Gate, Pyramid Power, ESP, etc.). What criteria should we apply to separate claims that correspond with reality from those that do not? To use an earlier example, how do we decide that the Leprechaun theory should not be taken just as seriously as the Germ Theory of Disease? We know by applying the standard of reason. If faith is a viable alternative to reason, then what are its rules? How do we know when to apply it? How do we know when someone has misapplied it? How can we tell the difference between the effects of faith and the effects of inadvertent, though well-meaning, self-delusion? Indeed, how can we test its validity?
Let’s illustrate this problem. A member of Christian sect X believes that all other sects are damned, and she says she knows this through faith. The person she is talking to is a member of sect Y, which holds that only sect Y is the one true faith and that all others are damned, including the members of sect X—and, of course, she knows this through faith. Clearly, they cannot both be right. The member of sect Y asks the member of sect X how she knows she is not really just hearing the deceitful voice of Satan leading her down a false path. To that our sect X member confidently replies, “I know that through faith as well.” Not surprisingly, these are the same answers the member of sect Y gives to exactly the same questions regarding her confidence in the truth of her own sect. There is no independently validated method to resolve the dispute. If reason is not the standard, then there literally is no standard, and people who abandon it have simply written themselves a blank check to believe whatever they choose. Cloaking this irrationalism in comfortable terms like “faith” does not make it any less irrational. As John A. T. Robinson once put it: “The only alternatives to thinking with reason are thinking unreasonably and not thinking.”3
1 George H. Smith, Atheism: The Case Against God (Amherst: Prometheus Books, 1989), gives an excellent introduction to this critical subject. I draw from Smith both here and below, in my discussion of axiomatic concepts; Smith, in turn, draws on the Objectivist epistemology of Ayn Rand.
2 Philosopher of science Philip Kitcher gives an outstanding introduction to the methodology of science in his Abusing Science: The Case Against Creationism (Cambridge: The MIT Press, 1993), from which I draw these points.
3 Quoted in Smith, op. cit., p. 110.