In our other article, Does Morality Depend on God?, we critiqued God-dependent metaethics. Nothing was said there about the merits of any particular non-theistic metaethical theory, and nothing needs to be said for that critique to stand. After all, God-dependent metaethics doesn’t somehow become true by default. However, much more can be said for a reason-based, non-theistic approach than that it is no worse than theistic approaches.
Needless to say, there is hardly consensus among non-theists as to the best metaethical theory, but one shouldn’t fall into the trap of thinking that if one isn’t sure what the right answer is, then one cannot identify a wrong answer, or a non-answer, on sight. The discussion in the article mentioned above, I believe, demonstrates the fatal flaws of a God-dependent approach, which eliminates it as a candidate in our search for a viable metaethical theory. That said, I do, of course, have a view (one influenced by writers such as Daniel C. Dennett, Michael Martin, Richard Dawkins, Kai Nielsen, Ayn Rand, and especially Michael Ruse).
I want to be clear at the outset that I will argue that there are no universal moral truths that exist independently of humans, though I will argue that objective morals exist. The notion of morality presumes a context of self-aware beings with behavioral choices to make, and to date the only self-aware beings of which we have any knowledge are humans, so moral concepts have meaning only within the human context, and lose that meaning outside of it. But as I’ll show, this is not equivalent to moral relativism. Certain “core” moral truths can be shown to be just as objective as certain truths about, for example, what is and is not poisonous for humans to eat. I’ll argue that moral truths are objective in the sense that we cannot arbitrarily choose what is and isn’t right for us, just as we cannot arbitrarily choose what is and is not poisonous to us. But also like poisonous-food truths, moral truths are contingent truths: they do not exist outside of the human context in some disembodied or independent sense; they, like our physical characteristics, are the result of our very contingent evolutionary history, a history that could have gone down innumerable paths but just happened to go down the one on which we find ourselves. What is and is not poisonous to us is a contingent fact of history. It could have been different, because we could have evolved differently than we did. Precisely the same can be said for what is morally right and wrong for us.
Now, keep in mind that I have not yet attempted to explain what I mean by the word “morality.” Up to this point I have only been trying to lay the framework in which it will be defined. As we saw in the previous article, theist statements such as “Morality is a mark of His eternal nature” can explain or justify nothing because of the tautological sense in which the word “morality” is being used. But if I say that morality derives its meaning only in a human context, am I not being similarly tautological? That depends on how I define morality. If I say, for example, that “good” is anything and everything humans do, no matter what, then it is just as tautological as in the theistic case. Indeed, this is how theists typically define good with respect to God. But if I say that “good” is that subset of real and possible human behaviors that promote the survival of humanity (merely an example at this point), then it is not tautological. In other words, in such a case we have an independent criterion against which to evaluate whether a given human behavior is good or not. Of course, it still remains to address the question of why one should consider any particular behavior (even if it does promote the well-being of humanity) as one a human ought to pursue, rather than just a description of a possible human behavior along with its likely effects.
My own view is that both ethics and metaethics can be approached from a Darwinian standpoint. 1 A Darwinian or other naturalistic approach can fundamentally do no more than describe what is the case. It cannot prescribe what ought to be the case—at least not without importing some additional premise that defines “good.” When one argues, for example, that certain behavioral tendencies can be explained in Darwinian terms, this should not be confused with arguing that these same behaviors ought to be exhibited. To take one simple example, our craving for fat and our delayed sense of feeling full are easily understood in Darwinian terms: they increase our chances of survival in environments where the next meal is far from certain (which describes most of hominid and human history), but our capacity for reason allows us to see that this same craving is more harmful than helpful to our survival in a modern, wealthy, industrialized society.
Knowing when a truth is self-evident
In what follows I take what I consider to be an objectivist approach when I argue that the is/ought boundary can be bridged at one particular point. 2 That point is where we identify a very small number of self-evident axioms—perhaps just one. These axioms, I argue, amount to self-evident, direct observations—so-called “percepts.” For example, some immediate, self-evident percepts regarding humans are that it is normal for humans to have language, to have two eyes rather than some other number, and to walk upright on two legs rather than, say, ten. Obviously, challenging the correctness of such observations would strike most of us as irrational, as well it should. Nonetheless, bear with me a while on a short but important digression as we examine exactly why challenging such observations is irrational. This will be very relevant to our subsequent discussion of moral objectivity.
Attempting to argue along the lines of, “Who’s to say two legs are normal?” or “How do you know that forty legs are not the normal human condition?” ignores direct, systematic observation and thereby implicitly assumes that we fundamentally cannot trust our senses or our ability to systematically observe reality. To do that is to commit to a self-contradictory position. Why would it be self-contradictory to argue that two legs is not the normal human condition? After all, it is certainly not a logical contradiction—there are possible worlds in which humans are like us in all ways except that they normally have some other number of legs. However, making such a claim in our world forces one to question what our senses are telling us—not just in a particular case, as when we see a mirage, but in general, since the observation that humans normally have two legs is a systematically confirmed, intersubjective, direct observation. Questioning its veracity amounts to arguing that our senses cannot be trusted to track an outside reality at all. Certainly we may be wrong in any particular direct observation, but the way we show that we were wrong in one particular case is by further appeal to our senses, by saying something like, “Look: as you can see from x, y, and z, you must have been mistaken.”
The point here is that concepts do not exist in a vacuum. With one class of exceptions, concepts derive their meaning from some immediately ancestral set of concepts and retain their meaning only within that context. You hit “bedrock” when you reach the so-called “axiomatic” concepts, which are irreducible, primary facts of reality—our “percepts.” These percepts form the foundation upon which we build all other concepts. As soon as you leave the context for a particular concept, that concept loses its meaning. For example, if you try to talk about what happened “before” time existed you would be talking gibberish, since the concept “before” presumes the context of running time, where things come before or after other things. As another example, if you try to talk about something existing outside of the universe (understood as “everything”), then you are again being incoherent: If it exists, it is part of the universe; existence presumes a universe. Similarly, if you want the word “morals” to have meaning, you have to stay within the context that gives it meaning: humanity. It becomes meaningless to talk about morals outside of that context.
In any given discussion, the concepts upon which the ones you are using depend are your presumptions. For example, as soon as you make an argument of any kind you make a number of unavoidable presumptions, some of which include the following: that the listener can reliably perceive your argument with his senses (otherwise, why would you even be speaking), that the rules of logic hold (otherwise, what could you possibly mean by an “argument”), and that there is a reality outside of your own imagination (otherwise, who are you talking to). As soon as you make any kind of argument challenging those assumptions, you are necessarily contradicting yourself, since in making that argument (or any other) you are depending on the truth of the very assumptions you are challenging.
This very valid objectivist argument comes up again and again in various contexts. For example, some argue that we can never be certain of anything, but in making such an argument one is presuming that we can be certain of that conclusion, and the argument that led to it, and the accurate perception of that argument, etc., etc., all of which contradicts the argument itself—that we can be certain of nothing. These “automatic” contradictions show that such things as logic, the possibility of certainty, the existence of an objective external reality, and consciousness, among others, are all self-evident truths. They are not arbitrarily accepted as true, nor are they taken on faith. Their truth is directly perceived with no way to coherently challenge them.
The reader may be wondering at this point what any of this has to do with morality, but as we’ll see below, this method of recognizing a self-evident, axiomatic truth will be used to identify a self-evident, axiomatic moral truth—one that, like our preceding examples, is impossible to question without self-contradiction.
Natural Selection and Game Theory
With this in mind, let’s see if we can get a handle on possible human moral axioms that are on a par with human physical axioms (like the fact that humans normally have two arms). By applying the concept of natural selection to population genetics, we can see some interesting things that are of direct relevance to morality.
A naive view of natural selection would have us believe that the only behaviors that will be selected for are those of extreme individual selfishness. But this is clearly mistaken since such behaviors are not always the best way to ensure reproductive advantage for one’s genes. There are two other advantageous strategies: kin selection, and reciprocal altruism.
As many people are aware, there are many life forms, like bees and ants, whose individuals sacrifice themselves for the benefit of others (soldier ants, etc.). Interestingly, these same kinds of behaviors are described as “altruistic” when humans perform them. Why would nature evolve life forms that are programmed to do this? While on the surface these behaviors may appear to contradict “survival of the fittest,” we now understand that what evolution really “cares” about is maximizing the number of copies of one’s genes in the gene pool. (I’m not saying this is “good,” only that it simply is.) This gene selection may or may not coincide with the immediate interests of any one individual. As it turns out, an individual can sometimes do more to increase copies of its genes in the next generation by helping a group of relatives to reproduce, even if doing so prevents that individual from reproducing at all. For example, given that your siblings each share 50% of your genes, it would be more reproductively advantageous for you to be willing to die without reproducing in order to save three of your siblings so that they can reproduce. 3 Therefore, genes contributing to this type of altruistic behavior will tend to spread in a population. This same kind of analysis suggests that the more closely related two individuals are, the more risk one will be willing to take on behalf of the other, not just in terms of defending or saving the other individual, but also in terms of resources expended on that individual (like food)—resources that could have been spent on oneself, thereby increasing the odds of one’s own survival. This approach seems to explain, for example, the cross-cultural, universal tendency to help close relatives without expectation of return.
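The sibling arithmetic above is an instance of what biologists call Hamilton’s rule: an altruistic trait is favored by selection when rB > C, where r is the genetic relatedness between actor and beneficiary, B the reproductive benefit to the beneficiary, and C the reproductive cost to the actor. A minimal sketch in Python (the function name and the specific numbers are illustrative only):

```python
def kin_altruism_favored(r, benefit, cost):
    """Hamilton's rule: an altruistic trait spreads when r * B > C.

    r       -- genetic relatedness between actor and beneficiary
    benefit -- reproductive benefit conferred on the beneficiary
    cost    -- reproductive cost paid by the actor
    """
    return r * benefit > cost

# Dying (cost = 1 lifetime of reproduction) to save three siblings,
# each sharing half your genes (total relatedness-weighted benefit 1.5):
print(kin_altruism_favored(r=0.5, benefit=3, cost=1))  # True: 1.5 > 1

# Dying to save a single sibling does not pay off genetically:
print(kin_altruism_favored(r=0.5, benefit=1, cost=1))  # False: 0.5 < 1
```

The same inequality captures the gradient described in the text: the smaller r gets, the larger the benefit must be (or the smaller the acceptable cost) before self-sacrificial behavior is selected for.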
Does this mean we should expect to see no cooperation between non-relatives? No. That’s where “reciprocal altruism” comes in. Reciprocal altruism describes the cooperative strategies that emerge in game theory: scenarios in which mutual cooperation is better for an individual, or a gene—on average—than non-cooperation. Applied to natural selection, game theory shows how certain behavioral strategies can emerge, spread, and then stabilize as the normal, dominant behavior in a population, meaning that any drift away from that behavior will be selected against. Even the impulse to help completely unrelated strangers—if the cost/risk is not too high—can be understood in these terms. For example, even if two individuals are completely unrelated, there is some, perhaps very small, chance that circumstances will arise in which a favor granted can, at some future date, be returned. The fact that this chance is small is why the risk taken in helping in the first place must be correspondingly low. Evolution is essentially matching risk and reward. Now, it is critically important to keep in mind that evolution doesn’t make ants and bees—or even humans—conscious of these cost-benefit calculations. Bees certainly aren’t conscious of them, and humans are rarely if ever conscious of them. What evolution produces is either direct programming or indirect, subconscious “urges” to behave as if those calculations were being made.
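The logic of reciprocal altruism is conventionally modeled with the iterated prisoner’s dilemma. The sketch below uses the standard payoff values from that literature (nothing in it is specific to this article): a reciprocating strategy like tit-for-tat prospers when paired with another reciprocator, while still limiting its losses against a pure exploiter.

```python
def tit_for_tat(my_hist, their_hist):
    # Cooperate on the first move, then copy the partner's last move.
    return their_hist[-1] if their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

# Standard prisoner's-dilemma payoffs: T=5 (defect against a cooperator),
# R=3 (mutual cooperation), P=1 (mutual defection), S=0 (exploited).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds):
    """Play two strategies against each other; return their total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30): reciprocity pays
print(play(always_defect, always_defect, 10))  # (10, 10): mutual defection
print(play(tit_for_tat, always_defect, 10))  # (9, 14): one exploited round
```

Two reciprocators earn 30 points each over ten rounds, two unconditional defectors earn only 10 each, and tit-for-tat concedes only the first round before withdrawing cooperation. This is the sense in which mutual cooperation is better “on average” than non-cooperation.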
Importantly, we see through this approach not only how such tendencies can emerge, but why they spread and become the norm in the species—something considered “universally” or “absolutely” good, though as we’ll see it may or may not be actually good. However, while showing, for example, why humans almost universally think that preventing senseless murder is good, game theory and natural selection also show that there will always be exceptions—“cheaters” who deceive to gain resources at the expense of the community, and remorseless sadists whose genetic defects, if you will, cause them to enjoy causing suffering in others. There will always be some number of such people, both because of the genetic variations that are always cropping up, and because a small number of cheaters among a majority of non-cheaters is a more stable evolutionary strategy than 100% non-cheaters. Fortunately, this same analysis shows that while there will always be such people, they will remain a minority.
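The claim that a small cheater minority is the stable outcome can be illustrated with a toy frequency-dependent selection model (all of the coefficients below are invented purely for illustration): cheating pays well when cheaters are rare, because there are many cooperators to exploit, but the advantage shrinks as cheaters become common. Replicator dynamics then settle at an interior equilibrium rather than at 0% or 100% cheaters.

```python
def cheater_fitness(p):
    # Cheating yields a 0.2 bonus when cheaters are rare (p near 0),
    # declining as cheaters become common; coefficients are illustrative.
    return 1.0 + 0.2 - 1.0 * p

def cooperator_fitness(p):
    # Baseline fitness, independent of cheater frequency in this toy model.
    return 1.0

def replicator_step(p):
    """One generation of replicator dynamics for cheater frequency p."""
    fc, fn = cheater_fitness(p), cooperator_fitness(p)
    mean = p * fc + (1 - p) * fn
    return p * fc / mean

p = 0.01  # start with cheaters at 1% of the population
for _ in range(500):
    p = replicator_step(p)

print(round(p, 3))  # settles at 0.2: a stable 20% cheater minority
```

The equilibrium is stable in both directions: below 20%, cheaters out-reproduce cooperators and spread; above it, cheating no longer pays and their share falls back. The population never purges cheaters entirely, and they never take over—just as the text describes.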
Now, ants and bees are amoral despite their self-sacrifice. Why? Because they do not consciously weigh choices. Indeed, they don’t have choices. Their genes directly control their behaviors in an inflexible, stimulus-response fashion. They merely behave as if they were moral. In the case of humans, however, evolution happened upon an interesting tradeoff: giving up a whole lot of hard-wired responses in order to gain the benefits of dynamic, flexible responses in the context of a self-aware consciousness, effectively allowing for a much larger range of responses. In place of hard-wiring, evolution produced a “softer,” more indirect determinism: urges, intuitions, and powerful moral sentiments. Rather than forcing specific behaviors, this softer determinism motivates behaviors through powerful dispositions and what we could characterize as a compelling moral sense. This moral sense creates powerful pressures not only for action, but also for restraint in the face of our other competing urges and desires.
Observed Data: The core set of human morals
We’ve seen up to this point how a naturalistic approach can at least explain the origin of the full range of what are generally considered to be morally “good” behaviors, from simple kindness to strangers to full-blown, self-sacrificial altruism. It seems that if we filled a room with randomly selected atheists, Christians, and, say, animist hunter-gatherers from some prehistoric community, we would find wide agreement among them in calling many things “evil”: betraying the trust of friends for personal profit, torturing children for fun, putting one’s individual needs over the needs of the community, etc.; and we’d find similarly wide agreement in calling their opposites “good.” Of course, we’d also find wide disagreement outside of this small core set of values on such issues as, for example, the proper family structure, allowable sexual relations and roles, and whether or not certain behaviors should be considered selfish (related perhaps to concepts of property). Notice, however, that the areas of moral agreement have a very local focus—a focus on the immediate community or tribe.
When some thinkers compare the ethics of different societies and conclude that there are no universally shared norms, it is because they apply a common scope to the morals of each society, a scope that is too broad. It is only when one shrinks the scope of each group’s morals to its own local community that the core set of values becomes visible. Why? Because evolution only operates locally.
For example, Will Durant, a historian of philosophy from early in the twentieth century, criticized the notion of an innate moral sense when he observed that
A militant society exalts certain virtues and condones what other peoples might call crimes; aggression and robbery and treachery are not so unequivocally denounced among peoples accustomed to them by war, as among peoples who have learned the value of honesty and non-aggression through industry and peace. 4
The point is certainly well-taken. One need only remember all the historical examples of predatory, marauding “barbarians” that raided agricultural peoples. These groups often showed little mercy for their victims, treating them little better than the animals they hunted for food. But if one compared each of these societies’ local ethics, their ethical values toward members of their own communities, then one would find that universal consistency in core values. Even a marauding, vicious tribe of predatory barbarians would not last long if their young weren’t nurtured and protected, or if they didn’t value behaviors like bravery and trustworthiness that put the tribe’s interests ahead of purely individual interests. It is this set of “short range” ethics that forms the common denominator of human morals, and it is short-range precisely because evolution operates, for better or worse, only in the short range. (In case you’re alarmed at this point, you will see below that I argue that long-range, global, objective morals do indeed exist, and can be rationally determined.)
But why is there a global, common, recognizable core set of values at all? Why shouldn’t the global pattern of human morals look like the global pattern of, say, conceptions of gods, or clothing styles? Unlike the case of morals, the global distribution of conceptions of gods shows no “center of gravity,” no universal core that is stable across all cultures and over the span of human history. Core moral values do have this property. Why? Well, in Darwinian terms the reason is that this core set directly serves the survival goal of genes in human beings, which is to say human survival. These core moral values are so directly and immediately connected to that goal that evolution has been able to hard-wire them. We feel the effects of this hard-wiring when we feel powerful moral imperatives; ones that make it feel for all the world as though we are somehow attuned to an “outside,” independent, objective moral reality. Of course, our feeling that way is precisely why those particular moral sentiments work and can be subject to natural selection.
It is important to bear in mind that evolution doesn’t “care” whether or not we have a completely accurate perception of “ultimate” reality. For example, modern physics has shown us just how wrong our hard-wired perceptions of time and space can be when dealing with the very small and the very fast. Evolution only cares that our perceptions increase our genes’ chances of survival. So, with regard to morals, evolution doesn’t make us consciously aware that betraying our friends will reduce our chances of survival; it instead creates in us a powerful sense that we don’t want to do it—that it is somehow disgusting and repulsive. Similarly, evolution makes us feel compelled to do certain things, like help a relative or even take extreme personal risk to protect the community; it makes these feel like just the “right” and “natural” thing to do, and we are moved by and admire such behavior in others. From evolution’s perspective, that “intuition” is sufficient.
Outside of this core set of values, however, the relationship of other, more peripheral values to our survival is not as direct, and so evolution cannot act on them directly, though it can act on our susceptibility to them. Many of our human choices are made within a complex social and cultural context. A particular behavior in one such context can enhance gene survival, while in another it can hinder it. Yet identifying, learning, and absorbing these non-core, community-specific behaviors can also be very important to gene survival. But since social and cultural contexts are complex, emergent, and highly variable over time and geography, evolution cannot hard-wire them. Instead, it has made us so receptive, especially in childhood, to absorbing our local cultural norms and customs that they become imprinted, and we find them almost impossible to shed, even when reason later shows us how harmful, or pointless, some of those values and norms may have become.
The moral axiom
But what does all this tell us? Really, all we have is an explanation of why some behavioral tendencies are more widespread in humans than others and why some values can come to be considered universal moral “truths” across history and across cultures. We haven’t said why such universal, core values ought to be considered good, only that they are and why they are.
We certainly don’t want to fall into the trap of thinking that just because the majority of humans share certain values, that those values are therefore “good.” After all, behaviors and values don’t become morally good simply because the majority believes that they are. But that is not what this Darwinian/game theory analysis suggests. On the contrary, this naturalistic approach shows that universal moral truths became universal not because the majority thinks they are right but because they enhanced the survival of individual humans and, therefore, the genes they contain. Values that directly and unambiguously do this on the local level (for humans) became hard-wired in our genes as our basic moral sense, though due to natural selection’s short-sightedness, these values tend to have a short, local range. Within that range, the survival of “my” genes (though not necessarily me as an individual) became intimately linked with possessing the core moral values, values that often place the welfare of others—at least our immediate relatives—above ourselves.
Again, evolution didn’t wire us to perceive this evolutionary dynamic directly, but as is usually the case, evolution did give us an imperfect proxy for this dynamic: Evolution gave us a powerful moral intuition that is most universal when it is most local; that is, when it is centered on our immediate families. Whether we are talking about marauding barbarians, or English butlers, all would very likely speak the same moral language when dealing with loving and caring for their immediate families—their closest genetic relatives. Of course, thanks to evolution’s short-sighted nature, the further one moves outside the immediate family, the weaker the altruistic tendencies become, and the less hard-wired are the moral intuitions.
But evolution also gave us our capacity for reason. And it is our capacity for reason that allows us to see further than natural selection can see—beyond the local and immediate—by using foresight. It allows us not only to recognize that our local, immediate (e.g., family) genetic survival is the raison d’être of our core moral sentiments, but also to see that this root “purpose” can be better served by extending many of these locally-oriented values and even suppressing others—others that are harmful to that purpose in our modern, globally interconnected world (like an almost natural suspicion and hostility toward those who look and act differently than those of our more immediate community).
This combination of our locally-oriented core values with our capacity for reason allows us to see our directly observable moral axiom—our immediate, self-evident percept of what is morally “good.” All we can do is observe that nature created within us a short-sighted set of moral instincts that do precisely what you would expect: further the survival of the “gene teams” within individual humans, who for most of their history lived in relatively isolated communities. More specifically, “my gene survival” became directly linked to “my family’s survival,” which in turn, became directly linked to “my local community’s survival.” This much, as we’ve seen, can be understood as a purely genetic effect, since natural selection operates over this local range. However, when we apply our capacity for reason (also a result of natural selection) to this, we can see that this local survival is directly tied to the survival of humanity as a whole—especially in modern times, since those isolated communities are becoming merged into a more seamless global community. So, with a little of that precious foresight that only reason allows, we can see the equivalence between “my gene survival,” and “humanity’s survival.” We can only call this “good”; and we can only explain why we call it good in these naturalistic terms. We cannot meaningfully talk about why it is good outside of this context. It can make no sense to ask whether this contingent state of human nature is good, or should be what it is—it simply is what it is, though it could have been different. What it is, however, is what we ultimately mean by “good.”
Now, at this point some readers may point out that I have not addressed the problem outlined at the beginning of this section: While I’ve explained how we came to have our core moral values in terms of their survival value, I have not explained why this state of affairs is one that ought to be. Why is it good that humans want to survive, even at the local level? After all, these critics might argue, who is to say that it is not morally preferable that humanity had never evolved at all? Keep in mind, however, our earlier discussion about self-evident truths and the context-dependent nature of concepts. “Good” and “evil” presuppose the existence of self-aware humans who have choices to make. That is the context in which the concepts of good and evil are defined. One cannot apply those concepts to the very context that gives them meaning. In other words, we cannot apply those concepts to humanity as a whole, as if from the outside looking in, and assert something like “it is good that humanity has evolved,” or “it is morally evil that humanity has evolved,” or “it is good that the core value of humans is that they want to survive.” Such statements are either unintelligible or at best analytic truths (e.g., that humanity’s existence is part of the definition of good). One can only use those concepts within their context—namely, living humans talking about their values with regard to other humans and their environment. It happens to be that our natures are such that we value our family’s and community’s survival, which reason can immediately extend to the survival of humanity as a whole. This simply is, like the fact that two eyes are normal rather than ten.
Recognizing this basis for morality, we can then derive objective, meaningful “oughts” in our moral system from this axiomatic “is.” This immediate percept forms the bedrock premise of our moral statements—statements which must be of the form, “if X (our bedrock premise), then I ought to do Y.” For example, in medicine, “oughts” are always implicitly understood as being part of a conditional: “If X, where X=you want to be healthy, then you ought to do Y.” Notice that given X, such “oughts” are completely objective.
In both medicine and morality, additional premises come into play in order to deal with particular situations; if any of those premises are in error, then the conclusion may be in error. For example, the “oughts” of medicine have changed as our scientific knowledge has changed, but those changes have always been guided by the application of reason in support of the unchanging root premise: “You want to be healthy.” This evolving, self-correcting nature of medicine would certainly not lead anyone to suggest that medicine is somehow subjective or relative, with no objective basis. So it is in the realm of ethics: the objective pursuit of our unchanging bedrock premise is what grounds ethics in objectivity and rationality. Other premises that are brought in to guide our moral decisions in particular situations, if based on poor data, may produce conclusions that are in error—errors that, once discovered through argument and evidence, lead us to change our conclusions. But the basis for those changes, just as in medicine, is the continuing application of reason in pursuit of the root goal. It is not subjective whim. And just as in medicine, the root, bedrock premise never changes.
Our moral “if,” that the survival of the human community is good (speaking as a human), is an immediate percept, and forms the basis for what we mean by “good.” It makes no sense to ask why that is; we can only observe that it is. It could have been different, but it happens to be what it is. Had it been different, we likely would not be here to discuss the matter. If the survival of the human community is what we mean by “good,” then with the application of another of our defining human characteristics, reason, we can quickly derive some other immediate, inalienable goods: for example, our survival depends on our wanting to survive, which in turn depends on our finding our lives worth living, and when we find our lives worth living (assuming we are not mentally diseased) we feel happy and fulfilled. As we’ve already seen, this in no way suggests pure selfishness; indeed, many feel happy and fulfilled when they are sacrificing for their loved ones.
Some might say at this point “well, what if my life is worth living only when I am killing innocent people?” But consider how we got to the statement that “it is ‘good’ that life be worth living.” We said life’s being worth living is a moral “good” only in the sense that it can be traced back to our moral axiom. If a person finds his life worth living in a manner inconsistent with that end, then we can say that the person is pathological, or evil—and we can say this objectively. Further, by applying reason to our moral axiom in combination with the detailed facts of a given situation, we can then objectively examine cultural norms, laws, customs, and even innate human tendencies, and make rational, objective arguments as to why some are evil and some are good. In principle, we can rationally determine what we ought to do in any situation from the combination of the core morals and the factual details of that situation.
I don’t mean to suggest here that such an exercise would be straightforward. In fact, I expect that the sheer number of relevant facts and the uncertainties surrounding them in most “real world” cases are often so great that consensus would be rare, and of course, this is exactly what we often see. But this is just to say that reasonable people can disagree in many cases, and that ethics is not often an easy thing. For example, a powerful rational argument can be made that overwhelming military force ought to be used in those situations where it is probable that far more lives will be lost, or human misery increased, through appeasement or a short-sighted, suicidal pacifism that often entices aggressors to violence. This argument would have been a sound reason to preemptively engage Hitler’s forces much earlier, since with hindsight we can see that the pacifistic appeasement of the 1930s was, ironically, the bloodier choice compared with the so-called “war-monger” strategy of “cutting out the cancer earlier rather than later when it’s much bigger.” Of course, both “doves” and “hawks” want the same things: stability and peace. What they disagree on are the probability assessments and the relevant facts of the case when arguing how best to achieve those ends. If they were equally sane, equally rational, equally well-informed of all applicable variables, and made similar probability estimates on all the unknowns, then they might always agree, due to their shared core moral axiom. But in practice, the variables are too numerous, and the uncertainties too wide, to always expect reasonable, caring people to agree.
So, in this section, we’ve seen how our moral axiom can form the self-evident core and justification of a rational, non-theistic ethical system. Its truth is not simply assumed, nor taken on faith, but directly observed. Its truth is self-evident in the same way that other contingent truths about humans are self-evidently true, and also in the sense that to deny its truth is to utter a kind of contradiction: if someone thinks human survival is not good, then he needs to explain why he hasn’t killed himself, to which the only apparent replies could be that he wishes he could but is just too afraid, or he wants to stay alive only long enough to bring about the destruction of humanity. I’ll grant that these are not logical contradictions as such, but I’d hate to think anyone would appeal to such statements as the basis for criticizing the objectivity of a naturalistic ethical system.
To put this point another way, one could argue that I’ve arbitrarily chosen to believe that human survival is “good,” but such an argument has all the intellectual force of saying that I’ve arbitrarily declared that eating arsenic is poisonous to humans. People could arbitrarily decide, for example, that the opposite is true, perhaps on the grounds that everything is subjective and that there is no objective reality. However, the objective nature of such human truths enforces itself: those who are wrong will disappear, while those who are right will see their genes spread. This only happens because there really is an objective set of truths to discover. If there were no such objective truths in the ethical sphere, then we would expect no global, universal recognition of these truths, but would instead expect to see an anarchy of truths with no “center of gravity,” no core content, which, by the way, is exactly what we do see in the case of religious “truths.”
A common criticism of naturalistic approaches is that they assume that whatever evolution “wants”—whatever is selected for—is “good.” But we’ve seen now that this is not the case under this particular approach. Earlier I used the example of our craving for fat and our delayed sense of feeling full when we eat. These are unhealthy in most industrialized societies. They persist because evolution has no foresight. Evolution is an utterly unconscious, goalless, and short-sighted mathematical process. Exactly the same can be said of game theory. The lack of foresight in these processes is certainly directly responsible for many extinctions, though we humans have a tool to break through this limitation: reason.
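The claim that a goalless, foresight-less process can nonetheless select for cooperative dispositions can be made concrete with a toy simulation. The sketch below (my illustration, not from the original text; strategy names and payoff numbers are standard textbook assumptions) pits three strategies against each other in a repeated prisoner’s dilemma and lets each strategy’s population share grow in blind proportion to the payoff it earns:

```python
# Toy replicator-dynamics model: three strategies repeatedly play a
# prisoner's dilemma, and each strategy's population share grows in
# proportion to the payoff it earns. No step in the process "looks ahead."

def iterated_payoffs(strat_a, strat_b, rounds=10):
    """Total payoff each side earns against the other over `rounds` plays."""
    R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment
    table = {("C", "C"): (R, R), ("C", "D"): (S, T),
             ("D", "C"): (T, S), ("D", "D"): (P, P)}
    a_total = b_total = 0
    a_prev = b_prev = "C"  # round one is played as if the opponent had cooperated
    for _ in range(rounds):
        a_move, b_move = strat_a(b_prev), strat_b(a_prev)
        pa, pb = table[(a_move, b_move)]
        a_total, b_total = a_total + pa, b_total + pb
        a_prev, b_prev = a_move, b_move
    return a_total, b_total

strategies = {
    "AllC": lambda opp_prev: "C",       # always cooperate
    "AllD": lambda opp_prev: "D",       # always defect
    "TFT":  lambda opp_prev: opp_prev,  # tit-for-tat: mirror the last move
}

names = list(strategies)
pay = {(a, b): iterated_payoffs(strategies[a], strategies[b])[0]
       for a in names for b in names}

# Replicator dynamics: each generation, a strategy's share is scaled by its
# fitness (expected payoff against the current population) over the average.
freq = {n: 1.0 / len(names) for n in names}
for _ in range(1000):
    fitness = {a: sum(freq[b] * pay[(a, b)] for b in names) for a in names}
    avg = sum(freq[a] * fitness[a] for a in names)
    freq = {a: freq[a] * fitness[a] / avg for a in names}

print({n: round(freq[n], 3) for n in names})
```

In this run the defectors briefly thrive by exploiting the naive cooperators, then collapse once those cooperators become scarce, leaving tit-for-tat dominant; the exact final shares depend on the assumed payoffs and round count. The point of the sketch is only that the selection, like evolution itself, is purely mechanical and short-sighted: nothing in the loop aims at cooperation, yet cooperation is what survives.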
Now that we have some idea of our metaethical theory, we can empirically test its implications against those of God-dependent ethical systems. First, we can ask what kind of global pattern in social morals we would expect to see under a God-dependent approach. It seems that we should expect to see a close correlation between the most fundamental human values and some particular faith or conception of god(s). But we don’t. For example, more humans throughout the world and throughout history agree that it is wrong to have sex with a parent or to torture children for fun than agree on any one religion, or conception of god(s). This pattern—a zero correlation between ethical values and religious conceptions—is very telling. It would seem to falsify any one religion’s claim that their god is behind “good will” in the world. How could these theists explain the fact that these values are shared by heathens, infidels, and atheists? Would their god give such heretics a sense of right and wrong but not an equally strong sense to follow the one “true” god(s)?
Of course, many Christians are unimpressed by this kind of empirical argument. Some Christian critics claim that their theistic view perfectly explains this global moral pattern. In effect, they are arguing that any correspondence between their values and those in other non-Christian societies is due directly to the action of the Christian God, while any differences are due to man’s “twisting” of the truth, or perhaps the action of Satan. As one writer put it, “If, as the Christian worldview teaches, all people are deserving of eternal punishment [?], God isn’t obliged to tell anybody. And since man suppresses, twists and distorts what God has ‘told’ them (e.g. general revelation via Romans 1), why should God give them more information to twist and distort?” 5 Bear in mind that this is intended to “explain” the observed pattern, but it misses the point: Why do all these societies selectively “twist and distort” in such a non-random way? In other words, there are some values they almost never twist, that are apparently universal — why? The ones that rarely get distorted are precisely those that relate to survival of the immediate family and community, including all the self-sacrifice, valuing of trust, and incest-avoidance that community survival requires. (This is not to say that people don’t betray trust, etc., only that when they do, it is almost universally despised and held as “evil”—which it objectively is under our naturalistic approach.)
The Darwinian approach, on the other hand, not only explains but predicts this specific non-random pattern; the theistic approach ignores it, and simply claims that whatever is bad is not from God, but whatever is good is from God. But this argument would fit any conceivable set of data. When nothing can conceivably count as evidence against a claim, then the claim is factually meaningless. (This is like the argument for the power of prayer: evidence for is, “I prayed and my sick sister lived despite the odds”; evidence against is not, “I prayed and my sick sister died anyway.” In that latter case, “God doesn’t always answer prayers or has His own unfathomable reasons.” Again, nothing could conceivably count as evidence against the power of prayer, so it is factually meaningless.)
We can continue this empirical approach by considering the hypothesis that the universe was created for a purpose by an intelligent and “good” designer. This hypothesis implies a number of testable predictions. One of these must be that we should find evidence of a consistent, non-random pattern of goodness in the universe, or even just in nature here on earth, and outside of the human context. What is the result of such a “test”? Dawkins puts it particularly well:
“In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won’t find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.” 6
To try to force the word “good” to fit a being behind such a universe would seem to force us to empty the word of any recognizable meaning, rendering it compatible with any imaginable turn of events or facts of reality. Saying “God is good” against this empirical background, then, is to make the statement factually meaningless. Under our naturalistic approach, however, “good” is not defined outside of the human context. Asking, for example, whether or not the universe is inherently good or evil is simply unintelligible. The universe simply is. Applying moral concepts to non-human contexts is like asking whether a block of graphite is evil graphite or good graphite.
Under this Darwinian/Game Theory view of metaethics—which I call Darwinian Ethical Objectivism—we are not saying that all of our attitudinal tendencies, urges, and desires are “good.” Indeed, they logically couldn’t all be good, since many of them are in direct conflict in a given situation. We are saying that only those that further the root axiom are good. Evolution has clearly given us an intuitive respect for that axiom, at least indirectly, through its focus on the local and immediate; but this evolutionary origin doesn’t make the axiom arbitrary. Evolution is effectively blind to the “big picture” and to the longer term—at least it was prior to its discovery of the power of reason in a self-aware being. Evolution’s near-sighted groping has come up with short-term approximations to what is best for our survival. Not infrequently, this same short-sightedness has led many a species right off the cliff of extinction, but with the power of reason our species can look ahead and intervene in evolution’s bad choices before it is too late.
Direct observation shows us that we naturally—instinctively—love our immediate relatives in a selfless way. With nearly the same instinctive force, we automatically bond to, and feel selfless sentiments toward, our immediate communities. In other words, the further we go from our immediate families, the less automatic those feelings are.
Reason shows us that these tendencies clearly directly supported our survival for most of the history of our species. But reason goes further. It shows that while evolution gave us these tendencies for our survival, evolution couldn’t see that these same feelings must be applied to the species as a whole, that the coordination of all the “tribes” is best for the survival of each of the tribes, just as the coordination of all the humans within one tribe is best for the survival of each tribe member. Unconscious, reasonless evolution gave us an almost purely local focus to our natural moral feelings. Conscious reason tells us that where species extinction is the alternative, our genes’ survival is better served by extending the local community interests to the interests of “humanity as a whole.”
As we’ve already seen, a theistic-based ethical system effectively eviscerates our ability to judge whether a deed is right or wrong, since we must first ask whether or not God did the deed or ordered it. If He did, then it is not only good, but perfectly good, and that fact is beyond our ability to judge. We simply take on faith that the Old Testament atrocities, for example, are examples of perfect goodness. In such a situation, there is no room for debate about the effects of such rules, only that those rules be obeyed—even if those rules (as in the Old Testament) involve rape, slavery, or slaughtering infants in front of their parents. Such an ethic, despite rhetoric to the contrary, is necessarily rules-based in the most blind-obedience sense. On the other hand, an ethical system based on our naturalistic model would be based on standards, not rules. All ethical rules would be open to constant, critical scrutiny to see if they still meet the standard of serving our core moral axiom. In such a naturalistic system, the moral worth of ethical rules is in direct proportion to the degree to which they can be shown to support our core moral axiom.
In no way should any of this be construed as saying that all historical crimes committed by theists in the name of God are a direct result of their theism any more than the many historical crimes of non-theists are a direct result of their non-theism. People can and do abuse any theory to justify their own evil ends. In fact, my contention is that the core moral sentiments of theists are, despite their protestations to the contrary, the same as the core moral sentiments of atheists, and they are the same due to the fact that we are members of the same human species. Moreover, a look at the history of so-called “absolute” Christian values over the last two thousand years is certainly consistent with the idea that Christian values have derived from, rather than being the source of, the values of the communities in which these churches lived. However, the explanations offered by many present-day and historical religious criminals are made in terms of—and are consistent with—the theistic metaethic we have described: blind obedience to rules, the morality of which we are not qualified to judge. The ethical debate other theists would have with these criminals would seem to boil down to the criminal saying “God wants me to do this, which I accept on faith and it’s beyond reason,” and the theist opponent saying “No, you’re wrong, and I accept that on faith and it’s also beyond reason.”
I certainly do not claim to have provided a “proof” of my particular brand of a naturalistic metaethic; however, I have shown that unlike its theistic alternative, it is both coherent and plausible. Further, even if the theistic alternative were also coherent and plausible, it would be less probable than my naturalistic alternative since Darwinian Ethical Objectivism accounts for the empirical evidence without resorting to untestable, ad hoc hypotheses like “That’s just the way God wanted it” or “It is beyond our ability to comprehend.”
1 I found Michael Ruse’s ideas particularly insightful in this regard. See Michael Ruse, Taking Darwin Seriously (Buffalo: Prometheus Books, 1998).
2 I use Objectivism in the Ayn Rand sense of the word. An insight into her epistemology as it relates to this and other subjects on this site can be had in George H. Smith’s Atheism: The Case Against God (Amherst: Prometheus Books, 1989).
3 A very thorough and brilliantly insightful discussion of gene selection and game theory in evolution can be found in Richard Dawkins’s The Selfish Gene (New York: Oxford University Press, 1989).
4 Will Durant, The Story of Philosophy (New York: Washington Square Press, 1963), p. 387.
5 Critic from vantil-applied.
6 Richard Dawkins, River Out of Eden: A Darwinian View of Life (New York: Basic Books, 1995), pp. 132-133.