
Was there a resurrection? (Easter special, a bit late)

The “eyewitness” evidence for the resurrection

Christian apologists such as Josh McDowell are fond of citing the “eyewitness testimony” in favor of the resurrection. This refers primarily to those accounts, given in the four Gospels, attesting to (a) the empty tomb and (b) sightings of an animate, embodied Christ post-crucifixion. First, note that neither of these is an account of “the resurrection” itself; rather, both are accounts of events from which a resurrection would have to be inferred.

This in itself is not a problem. However, with the possible exception of Paul (see below), we do not even have “eyewitness testimony” to these events. Rather, we have the testimony of second- (or third-? fourth-? fifth-?…) hand sources as to what somebody else witnessed. Clearly, these apologists are confusing testimony about an eyewitness with eyewitness testimony. While the Gospel writers record that there were eyewitnesses, they do not claim to be eyewitnesses themselves. Of course, this makes mush of the very idea of “eyewitness testimony”; on McDowell’s definition, the wispiest piece of gossip about something somebody saw could claim the same status.

The best-case scenario is this: Somebody saw something; somebody else reported it; and, following decades of oral transmission, somebody else finally wrote it down. The Gospel accounts should be handled with all the initial skepticism of any other rumor. As any game of “telephone” attests, oral transmission is notoriously error-prone. We have no idea how many times the Gospel stories were told and retold along the way to hard copy. We have no way to “vet” the witnesses to these events, nor all the intermediary reporters, as credible sources; we cannot even identify them, nor say how many they were.

* * *

Of course, some rumors turn out to be true. Certain factors can weigh against our initial skepticism. Some are investigated below.

Variation among the Gospel accounts

If the Gospel accounts were uniform, this would be a mark in favor of authenticity. But they differ wildly from one another. Consider the four versions of the discovery of the empty tomb:

  • Matthew: Mary Magdalene and Mary, mother of James, arrive at the tomb before dawn; an angel descends in their presence, rolls back the stone and sits on it; the women leave “with joy” to tell others, meeting Jesus along the way.
  • Mark: “The two Marys” and Salome arrive at sunrise to find the stone already rolled back; they enter the tomb to find “a young man”; they leave in fear, telling no one; they don’t meet Jesus.
  • Luke: The two Marys, Joanna, and “other women” arrive just after sunrise to find the stone already rolled back; they enter the tomb to find “two men” inside; the women leave to tell others; they don’t meet Jesus.
  • John: Mary Magdalene arrives alone while it is still dark outside; she finds the stone already rolled away; she does not enter the tomb, nor does she find “men” or angels; she leaves to tell others, and returns to the site with them; only then does she enter the tomb to find “two angels”; before leaving the site, she meets Jesus, whom she mistakes for the gardener.[1]

Prima facie doubts about the witnesses

Nor are our witnesses especially credible on the surface. They are not disinterested observers but devotees and close companions of Christ. They believed, and claimed publicly, that Christ would rise again. We would no more simply “take their word” for this outcome in which they were so deeply invested than we would take the word of a spouse who swears her murder-suspect partner was ‘home with me all night.’

This needn’t imply deceit on the part of the witnesses. Studies of eyewitness testimony find that perception, memory and reporting can be skewed by one’s values and motivations. There is a tendency, familiar from cognitive dissonance, to see what one wishes to see, and this is only intensified by religious fervor. The tendency could only have been more acute before the advent of philosophical skepticism and modern science.

Can the accounts be independently confirmed?

Under these circumstances, it is important that we match the accounts of Christ’s followers against independent confirmation. Such confirmation would count against a skeptical view. But it, too, is lacking.

The only Jewish source that seems to confirm the resurrection is a brief passage in the Antiquities of Flavius Josephus. Almost every Biblical scholar believes this to be a fraudulent insertion by a medieval Christian transcriber. The passage breaks with the narrative stylistically and thematically—almost comically so. Suddenly, a staunchly Jewish writer declares that Christ rose from the dead and so must be more than “a man”—and then goes back to being Jewish for the duration of his career. Early Christian thinkers who cited Josephus never drew from this passage; no Christians claimed him as one of their own; no Jews considered him a renegade or apostate. The best explanation is that they weren’t aware of the passage; it came later.

Other Jewish sources such as the Talmud, though sometimes cited as evidence for the resurrection, merely report existing Christian belief without endorsing it. Indeed, these reports always present the “resurrection” as a product of fraud or misperception.

Finally, some pagan-Roman sources (e.g., Tacitus) do mention Jesus, but for slightly more complicated reasons, these are also dubious. I could make the case, but at any rate these sources mention nothing about the resurrection nor anything miraculous, so they could not be considered confirmation of the events in question even if authentic.

The special case of Paul

The Apostle Paul mentions other sightings of a risen Christ. He reports that Jesus appeared to Cephas, “the twelve,” “more than five hundred brethren,” James, and “all the apostles.” But this record shares all the problems of the Gospel accounts: We do not know from whom Paul received his information, nor how many “links” he stood removed from the original accounts. We have no way to assess the credibility of anyone in the network.

All that aside, Paul is significant for being the only eyewitness to the resurrected Christ to record the experience himself. “Last of all,” he concludes his list of witnesses, “as to one untimely born, he appeared also to me.”

One problem is that there is nothing in the description to distinguish his experience from an hallucination. There is no indication that anyone else on the road to Damascus shared the experience. Moreover, assuming Paul experienced something, what makes his interpretation of that something correct? How did he know it was Christ he encountered? (The Gospel witnesses at least had the advantage of knowing the man in life.) Maybe the entity claimed to be Christ, but an entity that can make claims can also make false claims.

It is not even clear that Paul experienced an embodied being. (This goes for the others in his list as well.) Paul doesn’t say, but in the book of Acts, Luke reports that Paul experienced a light and voice only. But “resurrection” suggests some kind of bodily existence; thus, even if Paul’s experience was 100% authentic, it is still not clear that it was of a resurrected anything.

In general, nothing in Paul’s account confirms the resurrection story as it is reported in the Gospels. It does not provide independent confirmation of those accounts, so much as it provides an entirely new account requiring its own independent confirmation (with the added wrinkle that its lack of information makes it unclear what would count as confirmation).

Finally, Paul’s credibility is open to question. His own account of his travels after his conversion (given in Galatians) differs sharply from Luke’s rendering in Acts. Granted, we don’t know which account (if either) is accurate; but the discrepancy itself makes it hard to accept either author at face value.

Other considerations

Speaking of Paul: Did a historical Christ even exist?

Paul’s failure to mention anything about the empty tomb or the Gospel sighting stories brings up a related but broader issue—one potentially far more damaging to the resurrection claim.

Here we draw upon an argument made famous by “Christ myth” theorist G. A. Wells: Oddly, Paul seems unaware of any details of the life of Christ. He is silent about his birthplace and the entire “Christmas story,” his parents, miracles, trial before Pilate and other elements of the Passion drama, and even his ethical teachings. He mentions the last supper, crucifixion and the resurrection, but in very abstract terms, sanitized of all historical and geographic setting. Paul gives no sense of being a contemporary of Christ’s, or that Christ had died only a few years before. (Paul would have begun preaching about 12 years after Christ’s death.)

Surely this is not a coincidence or stylistic choice. Paul wrote to settle various theological disputes afflicting the early church. You might say he was desperate to lay these to rest. On many topics (e.g., celibacy; whether Gentiles should keep the Jewish law) it would have been to his advantage to quote Christ’s own authoritative words on these matters—yet he never does. (It might be argued, if Christ’s teaching was a matter of record, it is odd that these disputes emerged in the first place.) This suggests he was not aware of Christ’s views on these subjects. Worse, on topics like baptism, ministering to the Gentiles, and paying taxes, Paul’s teaching appears to contradict Christ’s. Despite the Gospels’ claim that Christ performed miracles, Paul suggests that Christ lived an obscure life and was unaware of, or failed to exercise, his own supernatural powers and mission until after he died.

Paul is not the only one with this blind spot. It is shared by the first several post-Pauline epistle writers. The lack of interest in Christ’s historicity persists all the way up to 1 Timothy, whose author suddenly introduces biographical elements such as we know from the Gospels. This new style continues through the remaining epistle writers and of course the Gospels themselves (which, contrary to the Biblical ordering, came after Paul). Keep in mind the epistle authors wrote mainly independently of each other; yet the pattern, and the shift, are evident.

Wells concludes:

Since, then, the later epistles do give biographical references to Jesus, it cannot be argued that epistle writers generally were disinterested in his biography, and it becomes necessary to explain why only the earlier ones (and not only Paul) give the historical Jesus such short shrift. The change in the manner of referring to him after A.D. 90 becomes intelligible if we accept that his early life in first-century Palestine was invented in the late first century. But it remains very puzzling if we take his existence for historical fact.

If Christ did not exist, then, he could not have been resurrected. But short of this “extreme” conclusion: If the Gospel stories entered the Christian tradition as late as A.D. 90, they carry less credibility than if they had begun circulating soon after the events they describe took place.

The problem of miracles in general

The resurrection’s status as a miracle carries its own problems. For Christians, Christ returned to life by God’s direct intervention. This is usually understood to be a “supernatural” event. First, I’m not sure any sense can be given to the term “supernatural”; if nature is simply all there is, what could be “super-” to that? Nor am I sure any sense can be given to “God” as most Christians intend that term. The divine attributes, traditionally understood, logically contradict one another; they could no more inhere in the same being than a “square circle” could exist.[2]

But let us for the sake of argument accept that a miracle is a fortuitous, God-directed violation of natural law.

This definition raises the question of how to tell whether a violation is real or merely apparent. First, as biologists have ably argued against the creationists, the fact that a phenomenon remains unexplained by “natural” causes hardly means that it is unexplainable. The whole history of the modern age is one of filling in those “gaps” in our understanding as new research unfolds; to simply assume that this puzzling event will never benefit from the same grand trend, so it must be supernatural, is presumptuous. A person could “naturally” come back to life, if very rarely, under laws we haven’t discovered, or whose precise workings remain unknown. As hard as this may be to accept, it is unclear how a “supernatural” act is any easier to swallow.

Second, the exercise of deceit, fraud and (again) sincere misconception can account “naturally” for the appearance of a violation where none has occurred.

The question is not whether a miracle could ever happen, or be known to happen. It is whether it is more reasonable to attribute stories of an empty tomb and “risen man” to a suspension of the laws of nature rather than to the sorts of causes by which we explain the great many other strange and unusual events we experience. At any rate, the evidence in favor of a miraculous cause would have to be exceedingly strong to overcome the statistical presumption of natural causality; and, as we have seen, this evidence is, rather, particularly weak and tenuous. The question is, then, whether it is more reasonable to leap to supernaturalism on the basis of very old, conflicting rumors of completely unknown origins.
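To make the statistical point concrete, here is the standard Bayesian way of framing it (the application to this case is my gloss, not anything McDowell offers). Let M be “a resurrection occurred” and T be “we possess the testimony we in fact possess.” Then

\[
P(M \mid T) \;=\; \frac{P(T \mid M)\,P(M)}{P(T \mid M)\,P(M) + P(T \mid \neg M)\,P(\neg M)}
\]

The prior P(M) is vanishingly small, since so far as all observation goes, the dead stay dead; and P(T | ¬M), the chance of such testimony arising without any resurrection, is not small at all, since fraud, delusion, and decades of oral “noise” generate miracle stories readily. On those assumptions the posterior P(M | T) stays low unless the testimony is of a quality that could scarcely have arisen without a real resurrection; and, as argued above, this testimony is nothing of the kind.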

Two more Christian arguments

There are two counter-arguments which jointly attack the idea that the resurrection could be a product of fraud or delusion:

(a) Why would the disciples agree to “die for a lie”?

As a young Christian, I heard stories of early believers hunted, crucified, and fed to lions, refusing to renounce their faith to save themselves. At times, this was used as evidence for the resurrection: If Christ had failed to rise again, his followers would have abandoned their belief in him. Even if they had faked his resurrection or otherwise lied about it (to avoid embarrassment, perhaps), they would not have been willing to sacrifice their very lives for the ruse.

First, I really don’t know to what degree these Christians could have gotten out of trouble by simply renouncing their beliefs. Maybe some of these martyrs had no choice in the matter. Perhaps their “thought crimes” were so grave that renouncing them failed to save them. But let us assume (some of) the Christians in question had the power to avoid that end, and didn’t exercise it.

Still, the argument fails in assuming that any “lie” would have to have been perpetrated by the same group that “died for it.” This doesn’t follow. The disciples could have been the victims of the lie rather than its source. Granted, as we have noted, they had a strong motive to show that Christ had risen. But others may have had motives of their own to perpetrate a hoax—to discredit their rivals as rubes and dupes, just for kicks, or for any other reason you like. The point is that people do this sort of thing all the time.

Worse, there is no evidence that the originators of the stories in question—those “eyewitnesses,” if you like—actually died, willing or not, for their faith. We know that some early Christians were martyred—but these are not the early Christians we have been discussing! If any martyrs who were not eyewitnesses “died for a lie,” it was one they didn’t know was a lie. That is, they were willing to die for a sincere religious belief. Surely that is easy enough to swallow; the phenomenon is well documented. Plenty of Christians who never claimed to witness anything miraculous have chosen to die for their faith.

(b) There is no such thing as a “collective hallucination”

The empty tomb and risen Christ are supposed to have been witnessed by groups rather than individuals. Some argue that a delusion such as a hallucination is not a “collective” or “contagious” phenomenon. It is not like a movie which many can watch at once; the reel is “in the head,” accessible only to its owner. Sharing the same delusion would be like sharing the same dream. Therefore, whatever experience these groups shared was not an hallucination but an objective, external, “real” event.

The psychology behind this objection is not quite correct. Like other psychological phenomena, hallucinations can emerge when a triggering event occurs under the right set of initial conditions. And there is simply no reason why one person’s hallucination (or his behavior while having one) could not itself trigger a second hallucination in a second person (and so on).

Indeed, “mass hysteria” is well documented in the psychiatric literature and often takes on a religious coloration. One sufferer manifests words or other behaviors which indicate to witnesses what the delusion is ‘of.’ This acts as a trigger for others who are present under the same initial conditions. This almost certainly happened during the Salem, Mass., witch scares: In a climate of intense stress and fear over demons and their earthly servants, one witness might imagine she smelled sulfur in a house, a sign of the devil’s presence; and her declaring, “The smell of sulfur is so strong here! It is Beelzebub!,” while screaming and writhing on the floor, and so on, triggered others around her to have a similar experience.

The Christian objectors may be falling prey to an ambiguity in the phrase “the same.” These sufferers are not quite enduring “the same” delusion so much as distinct delusions of the same type, related to one another in a kind of causal cascade. In this way, when my wife tells me of some injustice she has just suffered at work, her story drives me to share her anger—that is, it triggers “the same” anger in me. Nobody would doubt the reality of this experience just because anger has a “subjective,” individualized quality.

* * *

Note that these counter-arguments are not exhaustive. They treat fraud and delusion but not other hypotheses—for example, that the resurrection stories could be the product of “noise” picked up over decades of oral transmission, or of a sincere misconception short of psychosis. Each of these is at least as probable as a supernatural explanation. Thus, even if the counter-arguments were correct, they would not prove a resurrection occurred.

[I am going to post a follow-up hopefully soon. This is what I have for now.]

Notes

[1] The four versions of the sightings of the “risen Christ” are no more coherent. If anyone wants, I’ll break those down too.

[2] For instance, God is said to have perfect foreknowledge of the future, as well as freedom of action. But these traits conflict: If God knows he is going to do x in the future, he is not free to do y; if he were still free to do y, his belief that he will do x would be false.
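To spell out the conflict (the numbered formalization is mine; the objection itself is an old one):

(1) At t0, God believes he will do x at t1. (foreknowledge)
(2) Necessarily, whatever God believes is true. (infallibility)
(3) At t1, nothing can be done about what was already so at t0. (fixity of the past)
(4) From 1–3: at t1, God cannot bring it about that he does y rather than x.
(5) Freedom at t1 requires that doing y be open to him then.
(6) So God is not free at t1: perfect foreknowledge and freedom cannot both hold.

Defenders of the traditional attributes usually resist step (3), so take this as a sketch of the tension rather than a finished proof.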


Expelled redux: Darwinism leads to atheism?

[Following up on previous posts here and here.]

Perhaps the most specious claim in Ben Stein’s creationism polemic Expelled: No Intelligence Allowed is that Darwinism leads to atheism.

While certain atheists may claim inspiration from Darwinism, there is no reason to take their word for the accuracy of this connection.

In truth, evolution simply removes one argument for theism—the argument that speciation and variety in nature can only be explained by appeal to god. (Mono)theists got along without this argument for nearly the whole life of monotheism. Nor do all theists make use of it now. The moral philosopher Immanuel Kant, to whom contemporary Christians are arguably indebted for their ethics, denied the whole of such “natural theology.”

By analogy: The fact that ‘god didn’t cause speciation’ is like the fact that ‘my best friend didn’t cause the coffee stain in my carpet.’ My friend didn’t spill the coffee—nor did he murder Jimmy Hoffa, nor has he done the great majority of things which are done every day in the world. But this hardly means he doesn’t exist. When I offer an alternate explanation for the coffee stain—the cat did it, or I did it—it has no bearing on my belief in my friend’s existence; there can be spilled coffee with or without my friend. In this sense, there can be natural speciation with or without god.

My point is not just that replacing a ‘godly’ mechanism with a natural one leaves any other arguments for god untouched. It is that there really should be more arguments. If Darwin’s single omission invites atheism, the case for theism was pretty weak to begin with. Similarly, if my marriage is over simply because my wife forgets how to cook pasta, the argument for us staying together (i.e., her pasta skills) was poor anyhow.

Why god cannot be the ground of morals

[I admit there is nothing new below. Kai Nielsen made the same basic moves before me. But I think the synopsis is helpful. Also, I refer to god as ‘he’ because I’m a dumbass and a flake and can’t think of an elegant, non-sexist rendering.]

[Image: the Ten Commandments. Caption: Nope.]

A common objection to atheism is that, by removing god, one removes the “grounds” for morality. Without a divine legislator, humans are free to do and think whatever they want. Below, I attempt to show not only that (a) god is not needed to ground morals, but (b) he cannot be a ground for them.

God and morals: The wrong kind of atheist response

Atheists have been quick to respond that they are, in fact, no less moral than anyone else. And while I am loath to equate “imprisoned” with “immoral,” I expect many theists do, so it may be relevant that the prison population boasts about the same percentage of declared atheists as the broader population.

Christopher Hitchens gives an interesting twist on this defense. In The Portable Atheist, he issues a challenge to believers: “Name me an ethical statement made or an action performed by a believer that could not have been made or performed by a non-believer.”

Hitchens’ point is that there is no logical reason why atheists could not behave just as theists do. Assuming this is true, however, it probably doesn’t answer the theist’s main concern: Suppose there is a divine legislator whose will provides the reason (and the only reason) for ethical behavior. Sure, atheists could make any statements and perform any actions they want all day long—just as theists could say and do unethical things, in spite of god’s commands. The point is that they could only do so inconsistently.[1]

The believer argues that, without a belief in god, there is no rational grounds for behaving morally; an atheist can do it, but he cannot justify it.[2]

Fleshing out the theist challenge

The heart of the theist’s worry is over arbitrariness in morals. Unless morals are imported from outside humanity, each human is left free to define the good for himself, or to dispose of moral categories altogether. To be able to say that some choices are wrong requires a collective “touchstone” or measure against which the choices can be evaluated. We can still argue about what is good or bad; but as C.S. Lewis pointed out, that we can argue at all presupposes there is some real standard, independent of the arguers, to be argued over.

But how does god solve this problem? For he is in precisely the same existential position as we, it seems. He has no “touchstone” outside himself. His choices are just as arbitrary. We would do no worse to elect one of our species to legislate—my uncle Ron, perhaps; his communications would be more direct, no doubt, and would avoid messy controversies about the legislator’s existence. If there are problems with “Lord Ron,” invoking god does not so much solve them as push them back one level.

Believers would respond—they would have to—that god is more than a useful place-holder, an expedient “tie-breaker” in moral disputes. He does not so much ‘pick’ the good as he (including his will) just is good. Only if we assume god is perfectly good—good completely and at all times—can we know that whatever actions he wills for us are good.

Two options for theists

Option 1: Morals by divine command

Still, how god’s will relates to the good is unclear. To paraphrase Plato’s Euthyphro dialogue: Is it good because god wills it, or does he will it because it is good?

On the one hand, we could simply define “good” as “whatever god wills.” Then we know for certain that god’s commands are good, because “good” simply means “whatever god commands.” But this just thrusts us back upon the Lord Ron problem. It doesn’t solve the problem of arbitrariness in morals. God might have willed us to eat babies (he could change his mind and will this tomorrow), and it could not but be “good” to do so.

Some theists bite the bullet and accept this unattractive consequence—at least when asked. But it is unlikely they could ever embrace it in practice. Defining “good” as “what god wills” makes nonsense of much of what believers want to say about morality.

They can never, for example, speak of god’s having good (or any other) reasons for what he wills. It cannot be that in his wisdom god chooses this because it is good. This would imply he is referencing some standard of the good outside himself, which determines his choices—when we have already defined him as the sole standard.

Not to mention that most theists (traditional Christians, for instance) would be put off by the implication that an ‘almighty’ god should be “determined” or “limited” by some force outside himself in this way.

Likewise, every believer is taught that she always ought to do what god wills. (Certainly, the Bible demands this.) But this too becomes impossible. It invokes the same “outside standard” our definitions have forsworn. When “what we ought to do” and “what god wills” have the same meaning, the most one can say is “I ought to do what I ought to do,” or “god wills what god wills.” This bleeds our moral imperatives of substance, rendering them vacuously tautological.

Option 2: God as “substantially” but not logically good

Still, believers need to say that god’s commands are always good if they are to retain the idea that god is the ground of morals. They just need some non-tautological way of doing so. In short: We need to say that what god wills is always, necessarily good, but it is not good by definition. For lack of a better analogy, think of a dog which is not logically brown—dogs can be all kinds of colors, and this dog could have been another—but is still, in fact, always brown.

But this brings up a new epistemological quandary: How could we know god is perfectly good? There was no problem knowing he was good when we were simply defining him that way; we knew this in the same way we know unmarried males are always bachelors, or dogs (of whatever color) are always canines.

One could claim to know god is good through direct acquaintance with him or his works. His goodness would be known in the same way we know a friend’s penchant for jokes or cooking style.[3]

But to know whether god is good requires a prior and independent understanding of the concept of goodness. We have to know what “good” means before we can look and see if god and his works exhibit this quality. His goodness is not just a cognition but a re-cognition.

This explodes the theist’s argument: If we can know what is good before knowing god, this knowledge cannot depend upon him. We cannot “get our morals from god” because we need the morals to know whether he is in any position to give them out.

Conclusion

While the “grounding” of morals is an interesting problem, it is a problem for theists and atheists alike. Postulating divinity doesn’t get us closer to solving it. Even if there were a god, we would have to get a sense of right and wrong from a source outside him. Thus, the fact that we do possess a sense of right and wrong cannot be an argument for the existence of god.

Notes

[1] Hitchens’ approach exemplifies the problems I have with most of today’s “leading lights” (“brights”?) of atheism. They speak as though the only bad consequences theism could produce are observable ones: Religion fuels violence, divides people, slows progress on stem cell research, etc. This prejudice leads them to cast the ethical aspect of the debate as a question of how we get people to do the right stuff.

[2] The theists have a point: Living an authentically moral life requires more than the right behaviors. It requires having the right reasons behind them. Having good reasons is what makes moral actions moral in the first place: If I trip and accidentally cushion a baby’s fall from a balcony, I am hardly to be praised for the rescue. Indeed, I could have been on my way to throw a different baby off a different balcony; maybe I only tripped because I was wearing socks to better sneak into the apartment. Even if I meant to save the baby, but only in a crass bid for media exposure, it ceases to be a moral act. If, however, I place myself in the baby’s way deliberately, and for the baby’s own sake, I’m a hero, and deserve all the praise I get. The behavior is the same in every case; only the reasons distinguish them.

[3] There are other problems with knowing god’s goodness that exceed the scope of this piece. Note, god is not just good but perfectly good. It is unclear how one could be “acquainted” with any “perfect” quality. Maybe we can know god has been good up to now, but “perfect goodness” projects this quality into an infinite future—while our data is forever limited to the past and present. But this is for another post.

All wrong: A review of Ben Stein’s Expelled: Part II

Is the Academy enforcing thought control against Intelligent Design?

Part I of my review covered Expelled’s specious attempt to link Darwinism to nasty “social engineering” projects like eugenics and the Holocaust. Here, I discuss the film’s second major theme: Alleged “thought persecution” of pro-Intelligent Design professors by the academic establishment.

[Image caption: Ben Stein: Fake-ass rebel]

Negative “freedom”: A bogus virtue

A valorization of ‘negative freedom’—defined as the absence of external constraint—frames the whole film. Stein begins his narrative, “Freedom is what makes this country great….But imagine if these freedoms were taken away.” Well, he doesn’t have to imagine: American professors, he argues, are being punished for sympathizing with Intelligent Design (ID) theory.

The “taken away” line, accordingly, is accompanied by a montage of young black Civil Rights marchers brutalized by police dogs and hoses. Stein must intend that the “freedom” fought for by these marchers is the same object now being denied his academics.

This seems a stretch precisely because it is. In truth, there is no generic “freedom” to guide us in morals or social policy. There are only specific freedoms to do specific things. And just as our commitment to “food” doesn’t commit us to favoring every nutritive substance on earth with equal vigor, it is possible to embrace “freedom” without being equally committed to, or worried over, every freedom to do every thing. (Indeed, as every freedom is mutually incompatible with some others, we can’t be equally committed to all.)

If by “we should secure freedom,” Stein really means, “we should secure freedom to teach Creationism in the classroom,” fair enough—and make the case on the merits of the thing. But let us not pretend that a commitment to “freedom” automatically spells a commitment to this freedom.

Academic freedom, no less

Stein is shocked that “scientists” should be less than “free to ask any question, to pursue any line of inquiry without fear of reprisal.” But a strict absence of constraint in the classroom has never existed, much less “made America great.” Nor should it exist. (Nor is Stein really, in his heart of hearts, agitating for any such thing.) It would amount to nothing less than the abandonment of curricular standards.[1]

“Persecuted” professors?

Stein’s general approach

Regardless, Stein’s examples of ID sympathizers persecuted by the academy are so exceedingly weak that we can assume, if this is the best he has, the issue is effectively nonexistent.

Stein’s entire case rests on the testimony of five science professionals profiled in the first fifteen minutes of the film. I dare anyone to watch to that point, parse the narration carefully, and tell me precisely where Stein demonstrates how, in his words, “ID is being repressed in a systematic and ruthless fashion” by the academic establishment.

To establish the victimhood of his subjects, Stein employs the following suspect tactics:

(1) “Suggesting” causal connections without evidence: Stein describes how a subject (A) made public a commitment to ID, and then (B) suffered some loss of position. “A happened; later, B happened.” Of course, this gives the impression that the two events are actually connected in some way; but if you watch carefully, you’ll see Stein never makes the case. He doesn’t even try.

(2) Overlooking possible causes other than ID loyalties: Often Stein’s “victims” violated some university or professional policies which could just as likely explain their “discriminatory” treatment.

(3) Confusing “fired” with “expired”: In three cases, the so-called “expulsion” of Stein’s subjects coincided with the predetermined end of their contract periods.

The charges which don’t fall under these categories are “offenses” which, even if true, are simply not serious. Nor is there evidence they had anything to do with the victims’ ID commitments.

Finally, all of the accounts are purely anecdotal; nothing any “victim” claims is corroborated by other first-hand accounts. (Indeed, where others involved give their versions, they always contradict, and outnumber, Stein’s subjects.)

Stein’s profiled “victims”: A case-by-case analysis

(a) Richard Sternberg

This appears to be Stein’s “flagship” case. According to Expelled, Dr. Richard Sternberg, while managing editor of a biology journal, decided to publish a colleague’s paper “suggest[ing] intelligent design might be able to explain how life began.” At the time, Sternberg also held an “office” at the Smithsonian.

The paper, Stein recounts, “ignited a firestorm of controversy…[Sternberg’s] political and religious beliefs were investigated and he was pressured to resign.” Sternberg adds that the department chair (and other unspecified “people”) said bad things about his decision. But at this point, the worst we have is “pressure…to resign.” And Sternberg didn’t resign from anything. This hardly rates the imposing, red-inked “Expelled!” stamped across Sternberg’s face with a thud—a recurring graphic motif in the film.

So what did happen?

First, Sternberg could not have “resigned” his editorship on account of the article, as it appeared in the issue he’d already scheduled to be his last. In a subsequent issue, the journal’s publisher ran a retraction of the article. This was not for its ID-themed content, but because it violated their own (and standard) peer-review protocol: Sternberg claimed the paper had been reviewed by “four well-qualified biologists,” but refused to name them (and never has); he also failed to mention that he was one of them. The entire process was done behind the backs of the other editorial staff. This is all highly unorthodox and violates the practice and express rules of the journal.[2]

Neither could Sternberg have “resigned” from his job at the Smithsonian, because he didn’t have one. He was an unpaid researcher there under the rubric of another institution. He did, as he claims, “los[e] his office,” but this was not because of the paper, or ID, but because his set term as researcher was up. Right after, he was offered another research position at the same institution. Sternberg’s own email records document his supervisors’ opposition to any sanction of Sternberg for his ID sympathies.

This hardly describes the “exile” the professor claims to have suffered.

(Note too that the journal is a tiny regional paper with a circulation mostly internal to its publishing council. Whatever happened to its editor would hardly implicate “the academy.”)

(b) Caroline Crocker

Caroline Crocker was a biology professor at George Mason University. Stein begins, “After simply mentioning intelligent design in her cell biology class…her promising academic career came to an abrupt end.”

Note the correlation without causality: Crocker mentioned ID here; she lost her job there. But the link between the two events, if any, remains unshown.

Crocker’s “lost…job” amounts to the university’s failure to renew a contract that ended at a set time. This is not at all unusual—especially for part-time faculty, as was Crocker—and implies nothing particularly sinister. The university claims the decision had nothing to do with ID, and there is nothing but Crocker’s “feelings” to say otherwise.

(And Crocker did much more than “simply mention” ID. She taught the damn thing. But more on this below.)

(c) Michael Egnor

Stein narrates: “When neurosurgeon Dr. Michael Egnor wrote an essay…saying doctors didn’t need to study evolution in order to practice medicine, the Darwinists were quick to try and exterminate this new threat.”

So what did this sinister effort look like? In Egnor’s own words, “A lot of people on a lot of blogs called me unprintable names.”

This is the entire charge. At most, some of these bloggers encouraged their readers to call the university and ask for Egnor’s resignation. (We are not told whether any of them ever did.) But Egnor wasn’t fired or driven out from anywhere. His name-callers weren’t associated with any university or professional administration. For this, Egnor is the most dubious recipient of Stein’s thunderous “Expelled!” stamp across the forehead.

(d) Robert J. Marks II

Dr. Marks, an engineering professor at Baylor University, erected a website on the university server to solicit grant moneys for private research. The site explored ID theories. Marks’s entire complaint in Expelled is that the university asked him to add a disclaimer—the same type that introduces every infomercial—clarifying that Marks’s personal views may not represent those of the university. This reflects university policy, which, if anything, appears to have been bent in Marks’s favor to let him keep the site. Instead, Marks chose to export it to another server, where it remains.

Again, this is the entire complaint. Marks is still at Baylor University and continues to receive a river of grant monies totaling in the millions. (And again with the “Expelled!” logo. Jesus. A disclaimer on a website is “Expelled”??)

(e) Guillermo Gonzalez

Finally, Stein profiles astronomer Guillermo Gonzalez of Iowa State University. After Gonzalez published the ID-sympathetic book The Privileged Planet, his petition for tenure was turned down.

Once more, we are presented with two events but no evidence—no attempt, even—to show how they might be connected. Gonzalez himself can only speculate: “I have little doubt that I would have tenure now if I hadn’t done any professional work on intelligent design.” (Well.)

The Chronicle of Higher Education notes Gonzalez, by the time of his tenure denial, “had no major grants during his seven years at ISU, had published no significant research during that time and had only one graduate student finish a dissertation.” A Physics Dept. colleague of Gonzalez reported his work leading up to the denial conspicuously lacked any math, measurements, or tests.[3]

Conclusion

Expelled was Stein’s big chance to slam the intellectual establishment. With two years and a ton of resources (in his words, it is “possibly the most expensive documentary for its length ever made”), he produces this anecdotal piece of shit. It is as if the Klan produced a documentary to prove once and for all the validity of white supremacy, and all they presented was a couple of people saying black guys cut them off in traffic. I’d love to hear from people who find this convincing. I just don’t get it. Ben Stein has always sucked, but he’s better than this.

Two concluding points:

(1) Again, nobody in the documentary was fired, or otherwise sanctioned, for teaching ID. But what would be wrong if they were? As noted, “academic freedom” per se is simply crazy, and even the people who invoke the value don’t believe it in a strict sense. Stein himself gives the caveat that we wouldn’t want teachers to push Holocaust denial or flat-earthism in the classroom. So he must want limits to this “freedom.” But he never specifies what they should look like.

So why couldn’t ID in principle be relevant to one’s claim to lead a classroom or edit a journal? If the theory is plainly, grossly wrongheaded—crudely put, if it’s a damned stupid thing to believe—why should its endorsement not be a sign of scientific incompetence? I mean, fine, argue that it isn’t stupid; but stop acting as though nothing a professor believes could ever be relevant to his tenure.

On the other hand, ID could be a serious liability to scientific performance. It is classic god-of-the-gaps. And if you stop at the next gap, the next unknown phenomenon, and just assume it is designed, you stop looking for a genuine explanation. And the whole history of science—even the science the ID folks accept—is nothing if not the history of naturalistically filling gaps which looked at first to be designed.

(2) The most annoying part of the project is its faux rebellious air. Stein snarkily reports that his subjects “questioned the powers that be” and are now paying the price for it. This is accompanied by montages of the old Soviet Union building walls and showing force against “dissidents.” Of course the academy is supposed to be the brutish, conservative Regime and Stein and the ID guys are the lone rebels. This imagery is part of Stein’s cloying effort to hippen or “MTV”-up the film.

But rebellion in itself is nothing to celebrate. The NAMBLA pederasts’ website is full of challenges to a rigid orthodoxy. Every purveyor of every vile or idiotic thing is almost by definition a convention-flouter. You don’t get to be a cool rebel just because you believe crazy shit.

* * *

Notes

[1] By “scientists,” Stein refers to academics who are also scientists. Granted, the (alleged) persecutions are not strictly for things said, as I wrote, “in the classroom.” Some teachers have been targeted (again, allegedly) for things they wrote in academic journals. But my comments stand: Not only is it unreasonable to expect total freedom in the classroom, it is unreasonable to expect that you can exercise total freedom in your published work and it not affect your claim to a classroom. I’m sure Stein has no objection to teachers’ receiving jobs, or tenure, on the basis of published works. Everyone sees these as factors relevant to one’s teaching status. This is why every published professor with a website lists a C.V. But this relevance works in two directions.

[2] Nor was Sternberg nearly the most qualified, among those associated with the journal, to review the article. The article covered Cambrian-era invertebrates, on which many members of the publishing council are experts. (Sternberg is a taxonomist with no paleontological background.)

The article grew out of a meeting between Sternberg and the author (Stephen C. Meyer)—not the other way around. There is some reason to think they planned it as a “lame duck” parting shot which they knew would never fly under normal circumstances, and for which Sternberg would likely have been sanctioned had he not already been leaving. Meyer offered no new scholarship that would normally occasion publication, but cobbled together parts of papers he’d already published.

[3] The film also mentions “petitions” circulated by an “Avalos,” hinting that they sealed Gonzalez’s fate. Hector Avalos is a professor of Religious Studies at ISU. He co-wrote a general statement (not a “petition” for anything) against ID explanations which was signed by 130 other faculty. It wasn’t a policy document, nor did it result in any policy change. It predated Gonzalez’s tenure bid by two years. Nor did it name Gonzalez or any specific person.

All wrong: A thematic review of Ben Stein’s “Expelled: No Intelligence Allowed,” part I

Part I: Darwin and the science of “social engineering”

This documentary film bears two central themes: First, as the title suggests, it charges that American professors are being punished for sympathizing openly with Intelligent Design theory (ID), a spruced-up version of Creationism. Second, it wants to place some blame upon Darwinism for inspiring ugly historical movements like eugenics and the Holocaust. This critique is likewise placed within the context of an extended pro-ID argument.

This part of my review deals with the second theme.

[Image caption: Ben Stein: Airy suggestibilist]

Wrong from the start: Stein’s fatal error

Eugenics is the effort to “fitten up” the human species by breeding the strongest members, while sterilizing or otherwise preventing reproduction by those deemed weaker. The Holocaust, of course, aimed at the same effect by killing off the weaker ones directly. (I refer to these efforts collectively as “social engineering.”)

Stein is not at all specific as to why these sins should be laid at Darwin’s door—but more on that later. For now, one obvious problem with any attempt by ID advocates to blame Darwinism for “social engineering” is that the part of Darwinism that (allegedly) licenses these acts is the very part which ID theorists themselves accept.

Proponents of ID, including those interviewed by Stein (e.g. William Dembski), are fond of saying that Darwin’s mistake was misapplication. He started with a sound idea—natural selection—but simply tried to explain too much with it. All ID theorists accept that natural selection explains “microevolution,” or changes within the same species over time. But they deny that it explains the differences between species; that is, they deny “macroevolution,” whereby one species evolves into a completely different one. It is at this line of speciation that ID folks part ways with Darwin.

Fair enough, let’s assume. But it is not any particular application of natural selection that inspires the “social engineers”—but rather, natural selection plain and simple. If Darwin’s theory, as Expelled contends, contains the “seeds of horror,” then so does the very ID theory the film endorses. I see this inconsistency as nothing less than fatal for the entire project.[1]

What precisely is the claim here?

But let us dig deeper.

Again, Stein never bothers to say just how Darwin is supposed to be connected to Nazism and eugenics. The film throws up a lot of insinuations and “feelings” on the matter, none of which are really explored.

Here are a few of the documentary’s claims:

* The eugenicists and Nazi architects were “inspired” by natural selection. But so what? Sometimes inspiration is “taken” rather than “given.” Thirty years ago, John Hinckley, Jr. was “inspired by” Jodie Foster to shoot President Reagan. Foster was the inspiration, but her influence was completely passive, and completely excusable.

* Many leading Nazis, and almost all eugenicists, “were fanatical Darwinists.” Maybe, but they were mammals and Westerners and a lot of other things, too. Almost all of them had noses, I expect. Correlation is not causality.

* “The Nazis relied on…Darwin.” This is more specific but fares no better. For they also “relied on” bacteriology and neurology. A ton of real science was involved in the torture of and experimentation upon the “unfit.” (Hell, they relied on mundane things like trucks, wool and tinned meats, too.)

Of course, what Stein really means to say is that the Nazis claimed to be relying on Darwin. As such,

* Mein Kampf cites a “correspondence between” Darwinist ideas and Nazi ideas. But even if some ugly movements claimed to be Darwinist, it doesn’t follow that Darwinism is to blame for them. It is not enough that someone claim to be inspired by a belief to perform some horrible act; in order to blame the theory, it must be shown that it has not been misinterpreted. It must be shown that the perpetrators were correct to draw the inference.

* Darwinism is “a necessary condition” for the rise of Nazism (though not a sufficient one). This is more specific yet. But just as correlation is not causality, causality is not culpability. Perfectly benign entities such as oxygen, gravity, the nation-state, British and American citizens, and Jews, are “necessary conditions” for Nazism as we knew it. Of course, that doesn’t make those things bad. Hitler’s grandmother could have been a lovely human being, but she was a “necessary condition” for Hitler(ism).

* * *

So nothing like an argument is to be found above. The film leaves it unclear what the “engineers” thought was Darwinist about what they were doing, and what was in fact Darwinist about it. (This is compounded by the fact that when actual “social engineers” speak for themselves, they aren’t much clearer.) But perhaps we can stitch something together.

Helping Stein along

The breeding of human beings was an attempt to make humans more “fit” by purifying the gene pool of disease, deformity, stupidity, and so forth, on the assumption, of course, that these traits have a genetic basis.

If you look for “Darwiny” elements in this, I suppose you can find them. You could say that selective breeding of any kind is a “mimicking” of the natural selective processes Darwin codified. And granted, Darwin drew an analogy between natural selection and animal husbandry, to help readers understand the former on the basis of something with which they were already familiar. I suppose with some leap of the imagination you could read into this an analogy between human natural selection and human “husbandry.” But to draw—even explicitly—an analogy between human evolution and human “breeding” is hardly to suggest that humans be bred. It is just to say the two processes are alike in some respect.

Also, why would the “engineers” think they had to artificially produce mechanisms that are already working naturally? If they were true Darwinists, wouldn’t they predict the unfit would simply die out on their own? A strict Darwinism, it would seem, would obviate the need for “engineering” altogether.

Stein hints at an answer: The film rolls a clip from a Nazi propaganda film in which the narrator argues, “We [Germans] have transgressed the law of natural selection in the last decades” by permitting inferior members of the species (ostensibly Jews, the disabled, etc.) to survive and reproduce.

The suggestion is that in “the wild,” these inferior beings would die sooner and reproduce less, and thus not threaten the gene pool significantly. But modern culture has interfered with these “natural” self-correcting processes. Now we coddle and tolerate inferiors. Modern medicine—corrective surgery, wheelchairs—compensates for what nature has denied them. And liberal politics have removed the competitive pressure for resources by universalizing access to health care, education, suffrage, etc. This means we will cease to evolve (and possibly “devolve”) unless we act the role of nature and remove these genes ourselves.

“Social engineering” doesn’t follow from Darwinism

I guess this is the link Stein wants to draw. But there is nothing authentically Darwinist about this story. Here are the main problems I see with trying to hang “social engineering” on the theory of natural selection:

(1) Darwinism offers a description of a natural mechanism. Even if that description is false, and natural selection doesn’t exist, it is unclear how any description alone could license specific human behavior. The claim that natural selection operates upon organisms in no way implies that we should operate in “the same” way, or engineer things to bring about “the same” effects that it would. Likewise, the theory that cancer causes mortality does not license us to mimic this effect by killing cancer patients.

Darwinist “fitness” is not a moral category; it is not “good” or “right”—it just is. Getting from “disabled animals naturally die out, enhancing fitness” to “we should kill disabled people to enhance our fitness” requires a moral link that is not found within Darwinism itself, but which must be imposed from the outside.

(2) There is no such thing as “fitness” per se. An organism is fit only in relation to a specific environment. An organism that is fit in one environment might not be fit in another. So it makes no sense to say that “unfit” humans are being artificially kept alive in a modern environment; this can only mean that the modern environment—with wheelchairs, welfare, etc.—has now made them fit. Similarly, a person with 20/200 vision might be unfit in a hunter-gatherer tribal environment, but is no longer so in an environment with eyeglass technology. The new environment simply does not select against these persons as unfit.[2]
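To put the same point quasi-formally (the notation is mine, not Darwin’s): fitness is a two-place function, w(organism, environment), not a one-place property w(organism). The value of w(nearsighted person, hunter-gatherer world) may be low while w(same person, world with eyeglasses) is high; there is no environment-free quantity w for the “engineers” to maximize or restore.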

Once more, anyone wishing to rectify our “transgression of natural selection” is not only wrong but reaching beyond the clear bounds of what is authentically Darwinist.

(3) Even if Darwinism did license killing “the unfit” among us, who says that the targets of the Holocaust and eugenics were “naturally” less fit than anyone else? The social engineers had crazy ideas about what constituted fitness. For example, eugenics was driven by unscientific, “Dickensian” ideas correlating poverty with genetically based human weakness and moral degeneracy. Likewise, the Nazis had poor theories about “blood purity” which deemed the Jews less fit than others.

These theories are just false. It is simply not the case that poverty stems from “weak genes,” nor that without modern, liberal German amenities, the Jews would die “naturally” before reproducing. Nor do these views have any part in Darwinism; they had to be grafted on from outside.

(4) Assuming the effects of natural selection could be mimicked artificially, this would endorse social engineering only on the insane assumption that everything which can be done ought to be.

Still, putting aside the moral question, it is unclear that “mimicking” natural selection is even possible. Who knows whether our idea of fitness is the same one “nature” would select for, were its modern constraints removed? Even if they match, it is unclear that the goal of fitness can be achieved in a non-counterproductive way.

For instance, as virtually every evolutionary biologist believes (and Darwin seems to have believed), empathy and altruism are themselves evolutionary adaptations. They have a useful social function, on which the reproductive success of the group depends. Mimicking natural selection would require abandoning these behaviors, risking our becoming less fit in the longer term. Engineering for group fitness in this way is probably self-defeating.

Conclusion

Any attempt to link Darwin to the horrors of social engineering has to overlook that he explicitly and unambiguously argued against the practice himself. In The Descent of Man, Darwin strenuously condemns it on purely moral grounds. (The quote, with emphasis, and the analysis which follows come from this article in Scientific American.)

Darwin writes:

“The aid which we feel impelled to give to the helpless is mainly an incidental result of the instinct of sympathy, which was originally acquired as part of the social instincts, but subsequently rendered, in the manner previously indicated, more tender and more widely diffused. Nor could we check our sympathy, even at the urging of hard reason, without deterioration in the noblest part of our nature. The surgeon may harden himself whilst performing an operation, for he knows that he is acting for the good of his patient; but if we were intentionally to neglect the weak and helpless, it could only be for a contingent benefit, with an overwhelming present evil.”

As SA points out, Expelled shadily quotes the passage immediately preceding this one in Descent, alleging its support for social engineering. They ignore the one which contradicts that interpretation entirely—though Stein and the producers had to know of its existence.

[See part 2 of this review, and redux.]

Notes

[1] The most Expelled could contend is that both Darwin himself and the “social engineers” misapplied natural selection. Still, this lets Darwin off the hook for anything more than, again, the technical “paper error” of over-explanation. The moral error falls squarely on the heads of the social engineers.

[2] Behind the social engineering theory is an untenable (and un-Darwinian) nature-versus-culture dichotomy. The reason it makes no sense to say we blocked the action of “nature” is that “we” are as much a part of nature as anything else. Indeed, our action changes the environment, and thus the direction of selective pressures on ourselves, but this hardly makes it “unnatural.” All organisms do this very thing. We could change our behavior, and thus the direction of selection, but this wouldn’t be a return to behaving “naturally.” It would just be a different “natural” behavior than the old one.

On the “pragmatic” argument for special creation

[Job training has distracted me from blogging for several weeks. It appears I might be back.]

[I’ve been on an evolution kick. Read Jerry Coyne’s “Why Evolution Is True” if you can. It covers all the angles and reads like a novel.]

[Image: “Man Is But a Worm”]

We’ve all heard the sentiment that our having evolved from “lower” animals is less ennobling than special creation by a deity; that it decreases our special worth or dignity.

Sometimes this sentiment is worked up into an argument, as when Darwin’s contemporaries said that one had only to look at the queen to know she just couldn’t have come from a monkey. This is “pragmatic creationism” proper.

Today, this idea rarely appears as an explicit plank in the creationist’s case, but the feeling persists that evolution degrades humanity, existentially speaking. And I suspect this feeling motivates the creationists. Among them, a broad spectrum of evolutionary phenomena is admitted: Young-earthers accept no evolution whatsoever; more liberal types accept micro-evolution; others accept macro-evolution nearly across the board; still others (the “irreducible complexity” folks) accept that species macro-evolve but deny that certain biochemical cellular processes do. Yet every creationist stops short of human (macro) evolution.

Two responses leap to mind:

First, it is simply wrong to equate the origins of a thing with the thing itself, or the goodness or badness of the one with the goodness or badness of the other. (This is the logician’s “genetic fallacy,” or fallacy of origins.) To say that B came from A is hardly to say that B is A, or is even like A. Granted, Bs often look like their As, and not by accident; sons are somewhat like their fathers, and so forth. But the quality and degree of this resemblance, and what worth to attach to it, must be investigated, not merely assumed.

Second, “wishin’ don’t make it so.” Assuming we would be made “lower” by lowly origins, why should the facts succumb to our feelings about them? Our origins do not become one way rather than another because we would be uncomfortable with the alternatives. I may be upset at being the product of an alcoholic father, or of an incestuous relationship, but this emotion hardly changes the case.

Darwin weighs in

Darwin himself gave a cleverer answer in the conclusion of The Descent of Man:

“I am aware that the conclusions arrived at in this work will be denounced by some as highly irreligious; but he who denounces them is bound to shew why it is more irreligious to explain the origin of man as a distinct species by descent from some lower form, through the laws of variation and natural selection, than to explain the birth of the individual through the laws of ordinary reproduction.”

In other words, if it is off-putting that our entire species should have emerged from a “lower” form, it should be at least as off-putting that each individual human develops from a lower form in the womb—a form much lower than any adult “ape” (i.e., pre-human hominid). And yet this development is something no creationist would deny, nor feel particularly bad about.

Worse, on the road to “higher” humanity, the human embryo has to pass through multiple lower forms: We begin as fish do, with gill-like pharyngeal structures, a tail, and a fishlike branchial circulatory system. Then we go through an amphibian, then a reptilian stage. Before becoming recognizably human, we have a brief “lower primate” stage, when the fetus becomes entirely covered with a coat of fine hair called “lanugo.” (We shed this before birth, while monkeys retain their coat.)

* * *

Our embryonic staging is itself evidence for evolution, in the sense that the story makes better sense on an evolutionary view than on a theory of special creation. Per Haeckel’s famous dictum that “ontogeny recapitulates phylogeny” (taken loosely today, not literally), the sequence of embryonic stages mimics the sequence of major evolutionary stages through which our species evolved. First we were fish, then amphibians, then reptiles, and so on. (All other evolved organisms show this pattern also.)

Evolution typically proceeds by addition, or accretion; it is easier (that is, more conducive to fitness and survival) for nature to “tack on” a new feature than to remove one and hope that the remaining ones work around the gap. Of course, this means tacking on a new gene which codes for the feature. In the womb, we develop in ways our “lower” ancestors did (or do) because we have inherited strings of their old genes, the ones that code for their development. We fail to keep the reptilian and other ancestral features only because we have acquired further genes which switch the old ones off before birth.

It is weird enough on technical grounds that a creator would place genes for gills, fur, etc., inside us, only to deactivate them before birth. Instead, s/he could have started us as the preformationists’ homunculi: tiny, intact humans that do nothing but grow in size.

On moral grounds, it would be pointless for a creator, in a bid to secure our “specialness,” to forbid us as a group to develop from lower beings, only to force each of us individually to develop this way.

Conclusion: The aesthetics of descent versus special creation

The main error of the “pragmatic creationists” is to mistake an aesthetic preference for reality. This aside, is it really preferable on aesthetic grounds that we should have been created rather than evolved?

We are the culmination of a vigorous natural epic, billions of years in the making, one that could have gone in a billion other ways, but didn’t, and that will continue beyond us in ways that we help determine. This is simply more interesting than our having been dropped here, without papers, without biography, without a legacy. For all the reasons “God did it” is a scientific non-starter, it also makes for a piss-poor narrative.

Perhaps our being formed by the same processes as hagfish, dung beetles, and leeches decreases our “specialness” among them. But “being the best” is hardly the only value. And it isn’t always that valuable. It’s lonely at the top. To believe ourselves fundamentally, irreducibly set apart from the world can be, indeed should be, profoundly alienating. An evolutionary story recovers for humans a sense of “at-home-ness” in the world. It permits us to belong and identify.

Not to say that our special place totally dissolves. For we among all “beasts” can reflect upon our mutual heritage. We alone can write the story down. (We alone can blog about it.)

Finally, in poetic terms, our evolution represents a kind of achievement. “We” have struggled, and triumphed. Typically, we praise and admire achievements over charity. (For this reason, heirs and contest winners are always resented by those who “earned” it.) To sit atop this long struggle is arguably more “special” than to have been given the damn thing. Again, this could be said of other species, too, but again, only we can appreciate it.

Speaking of C.S. Lewis and his “trilemma”

I have had recent occasion to reference C. S. Lewis and his famous “lunatic, liar, or Lord” gambit. Lewis presents an argument in the form of a “trilemma” (a choice among three unpalatable options: a dilemma plus one). He addresses his argument to those many non-Christians who maintain that Christ was a “great moral teacher,” though not God. He attempts to show that this position is contradictory.

In brief: Christ himself claimed to be God. So if he is not God, he is either lying (knowing he isn’t God but saying he is) or crazy (thinking he is God when he is not). In either case, he cannot be a “great moral teacher”—for a liar isn’t “moral” and a lunatic isn’t “great.” The only way out is to accept that Christ is the God he said he was.
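Since the force of the gambit lies in its structure, it may help to set that structure out formally. Here is a minimal propositional sketch (the letters are mine, not Lewis’s): let G = “Christ is God,” L = “Christ lied about his divinity,” M = “Christ was mad,” and T = “Christ was a great moral teacher.”

\begin{align*}
P_1 &: G \lor L \lor M && \text{(he claimed divinity, so one of the three holds)} \\
P_2 &: L \to \lnot T && \text{(a liar isn't ``moral'')} \\
P_3 &: M \to \lnot T && \text{(a lunatic isn't ``great'')} \\
\therefore\; T &\to G && \text{(grant $T$, and $G$ follows)}
\end{align*}

The criticisms below each target a line of this sketch: the “myth” and “mistaken” responses add disjuncts to P1, weakening the conclusion; objection (2) denies P3; objection (3) denies T itself, which leaves the conditional standing but idle.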

Criticism of Lewis usually takes the path of denying the premise that Christ claimed to be divine. This works best against Christ’s more explicit (supposed) claims to divinity; some of these are almost certainly mistranslated or interpolated by Christians after the fact [1]. It is harder to discount Christ’s more implicit claims to Godhood, such as his forgiving sins committed against third parties (something, Lewis points out, that can only be done by the third parties or by God himself).

Other critics attack the logic of the argument. Some deny that Christ existed, even as a historical personage, expanding the trilemma into a “quadrilemma”: “Lunatic, liar, Lord, or myth.” (Some of Lewis’s defenders accept this posing of the problem, admitting Christ’s nonexistence as a logical option while denying the truth of that option.) This tack has appeal and is probably my own view; philosopher Michael Martin mounts a vigorous, independent defense of the “Christ as myth” view in chapter one of his “The Case Against Christianity.”

But it isn’t necessary to believe Christ never existed to diminish the force of the trilemma. To the “myth” angle I would add a few objections of my own, which don’t depend on so “radical” a conclusion:

(1) We might expand even further to a quintilemma: Christ could have simply been mistaken about being Divine.

The “simply” here is relative: While it may seem far-fetched that a (sane) person could think himself God, this is inexpressibly less far-fetched than that there could be such a thing as a God at all. And that is less far-fetched still than that there could be a man who is also God. (A man by definition cannot be God, just as he cannot by definition be a duck or a piece of twine; what it means to be a man rules out these possibilities on logical grounds.)

Indeed, there is probably no delusion that it is impossible for a sane person to maintain. For “lunacy” is not defined by a single fixed, false belief; nor is one such belief sufficient evidence for a “lunacy” diagnosis.

(2) On the other hand, it is simply unwarranted to assume that having a mental illness, a delusion, is incompatible with being a great moral teacher. Certainly, we know of “great teachers” of subjects other than morals (John Forbes Nash of mathematics fame, for instance) who suffered serious fixed delusions while doing the very work that makes them “great.” Why should morals be any different in principle? (Again, we don’t have to think a “lunatic” moral teacher is very likely, just that it is easier to believe in than a man-God.)

(3) The notion that Christ is a “great moral teacher” is also suspect. Two points to keep in mind:

First, the burden of proof for “greatness” must be very high. We cannot mean “great” in the sense that our uncle is a “great” chef or our fifth-grade teacher was “great” with kids. The claim is that Christ is a moral teacher for the ages; that he is historically great.

Second, I am assuming Lewis’s “great” does not mean “influential,” “lasting,” or any other term which refers to the effect or reception of Christ’s teaching rather than the content of the teaching itself. For surely, though a popular moral teacher can be “great”—and part of what makes him great accounts for his popularity—it is not popularity which makes him great. We have to assess what he says, and how he says it. A great moral teacher will have great moral teachings. So “great” will mean “profound,” or something like that.

With this in mind, it is doubtful Christ’s teachings qualify. The Golden Rule, from what I can tell, has an analogue in virtually every other religion, including the Judaism Christ ripped it from. If teaching the tenets of one’s own religion makes Christ “great,” every Christian who teaches it after him is “great” as well.

And the Rule isn’t so much an ethical point as a presupposition of doing any ethics at all: It is a call to put ourselves in the other’s place, to be empathetic. But we do ethics to find out how to act empathetically, not to learn that we should; you already have to grant that you should be empathetic to be interested in doing ethics in the first place! A moral teacher asking his pupils to empathize is no more profound than one asking them to “be good,” or to “do the right thing.”

Overall, Christ’s body of teaching is not particularly rich. It mostly takes the form of injunctions or “points” rather than arguments. There is nothing like a “system” of thought. There are statements, but not much reasoning among the statements. We aren’t told why we should do what he says to do. Place this alongside the work of a “great moral teacher” like Aristotle, who predates Christ: his ethical literature, with its comprehensiveness and detail, offers a methodology for knowing when something is right or wrong, organizes the virtues into a broad taxonomy, and situates it all within an interdisciplinary system of thought. There is no comparison. If both Christ and Aristotle are “great,” the spectrum of “greatness” is so top-heavy as to be unusable.

(4) Finally, the same trilemma can be constructed for other figures and placed before Lewis and the Christians themselves. Plenty of persons other than Christ have claimed to be God. Some of these offer no more evidence of “lunacy” than Christ himself; and of these, some are religious leaders with as much claim to “great moral teacher” as Christ. That is, if Christ can “pass” as sane and “great” and honest, so can they. By Lewis’s reasoning, then, all these men and women are God. Lewis is a polytheist, or his reasoning is bad. (Of course, his reasoning is bad.)
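Schematically (again in my notation, not Lewis’s), the trilemma is really an inference schema, and a schema applies to every instance alike:

\begin{align*}
&\text{Schema: } \forall x\,\big[\mathit{ClaimsGod}(x) \land \lnot\mathit{Liar}(x) \land \lnot\mathit{Lunatic}(x) \to G(x)\big] \\
&\text{Given claimants } x_1, \ldots, x_n \text{ who each pass the honesty and sanity tests:} \\
&\therefore\; G(x_1) \land G(x_2) \land \cdots \land G(x_n)
\end{align*}

Since at most one of these claimants can be the one true God on Lewis’s own monotheism, all but one instance of the schema must fail, and with them the schema itself.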

Notes

[1] See theologian John Hick’s “The Metaphor of God Incarnate.”

[2] Lewis’s contention that a mere man who thinks he is God is “on the level with the man who says he is a poached egg” draws a poor analogy. For one, the belief that one is God is at least coherent (at least as coherent as the idea that some man could be God). But the belief that one is a poached egg is very unlike this. The very idea is self-refuting: Since a poached egg cannot have thoughts, a man who was one could not have the thought that he is one. (Therefore, if he has the thought that he is a poached egg, he cannot be a poached egg.) The belief carries within itself the means of its own discrediting. Unlike belief in one’s own Divinity, which can be validly held by at least someone (namely, the one true God), one cannot believe oneself to be a poached egg without holding a lot of other fucked up (crazy?) beliefs at the same time (beliefs about the nature of objects, etc.).
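The self-discrediting structure of the egg belief can be made explicit (a small predicate-logic sketch; the symbols are mine): let E(x) = “x is a poached egg” and B(x) = “x believes that he is a poached egg.”

\begin{align*}
P_1 &: \forall x\,\big(E(x) \to \lnot \mathit{Thinks}(x)\big) && \text{(poached eggs have no thoughts)} \\
P_2 &: \forall x\,\big(B(x) \to \mathit{Thinks}(x)\big) && \text{(believing is a kind of thinking)} \\
\therefore\; &\forall x\,\big(B(x) \to \lnot E(x)\big) && \text{(holding the belief falsifies it)}
\end{align*}

No parallel derivation exists for “x believes that he is God”: that belief is false in every mouth but (at most) one, but it is not false as a matter of logic in whatever mouth holds it.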