Monthly Archives: February 2010

Horowitz versus Chomsky on the best way to get rid of a dictator

To harp on a theme, I hate those abuses of language which are just cute enough to be dangerous. The latest to come across my digital desk is from an old article in the Jewish World Review, authored by the slimy ex-socialist David Horowitz of FrontpageMag.

Horowitz chronicles an argument between himself and still-socialist sociology prof. Maurice Zeitlin. He sees a contradiction in Zeitlin’s being opposed to both Saddam Hussein and the 2003 invasion of Iraq by the US and others.

This phrase stuck in my gullet:

This cri de coeur begs the most important question: What does it mean [for Zeitlin] to oppose Saddam Hussein’s “execrable regime” and at the same time to oppose the effort to change it?

Reread those last five words. I know Horowitz used to have better politics, but this comment is just fucking stupid. Yes, Zeitlin opposed the 2003 invasion of Iraq, which was certainly an effort to change the regime. But was it “the effort”? If Horowitz declines my advice that he take a pottery class, can I conclude that he opposes “the effort to improve himself,” rather than just this particular effort? Horowitz’s use of the definite article snakily suggests that Zeitlin rejects not just the invasion, but the very effort—that is, the idea of an effort being exerted at all—to change the regime.

Horowitz’s implication is doubtful in the highest. Zeitlin would not have opposed every imaginable effort to overthrow Saddam. Suppose Saddam had agreed to step down voluntarily. Let us further assume this was done according to some benign process which did not create a chaotic vacuum of power or other seriously bad outcomes. (Maybe S.H. converted to liberal democracy and had himself jailed—or something.) Surely, Zeitlin would not have excoriated Saddam for failing to remain in power. (Below, we will consider another scenario which he would have supported.)

Further, at any given time before 2003, there were other, actual “efforts” afoot to change the regime. (Indeed, the US intervened to crush a few of them.) Would Horowitz consider any of these, in their time, the effort to change the regime, requiring our support on pain of being numbered among Hussein’s apologists?

Add to this plurality of actual efforts any number of potential ones that might have been dreamed up: Suppose that in February of 2003, a crazy billionaire had dropped babies armed with pink umbrellas into Baghdad to fight the Republican Guard and topple the regime. Babies can’t fight with umbrellas, you say?—The billionaire has cast a spell which he feels strongly will allow them to. Surely this is an effort—somebody’s effort—to change the regime. Would it become the effort, then, demanding our allegiance?

In sum: Surely opposing some bad thing does not commit one to just any old “effort to change” it; just any solution someone can pull out of his ass doesn’t become a referendum on how authentically we oppose the thing needing changing.

The question is, rather: Is it a good effort, a sensible effort; one that can be reasonably assumed to (a) work, and (b) do so in a non-counterproductive way (that is, in a net sense of not creating so many bad, unintended outcomes that the overall outcome, even with the met goal, becomes bad). It should also (c) be better than other possible schemes to accomplish the same outcome.

The 2003 effort to remove Saddam has (a) “worked” in the meagre sense that it did remove him. But it has been (b) counterproductive in the more important sense of exacerbating all of those factors that supposedly made removing him a good idea. I don’t want to take this space to make that point fully. Just to note:

* Instead of ending one WMD regime, the war has set two others (Iran and North Korea) in motion.

* The war created a jihadist enclave in the one place in the region where that threat had been completely pacified. As I have noted elsewhere, this was not the result of drawing in terrorists from other locations but of making new ones. Terrorist attacks against Westerners have spiked since the invasion. The balance of “our own” reports (Pentagon, State Dept., FBI, CIA, etc.) blames the War on Terror for this.

* The occupiers have killed and jailed far more innocents than Saddam. The Iraqi government remains a police state, complete with nightly curfews in the capital, bans on public assembly, and the like. It has the worst human rights record in the region and is dollar for dollar its most corrupt.

* The war completed the process, begun with the sanctions, of bombing into the 3rd World what used to be the most technologically, economically and socially advanced nation in the Middle East. It is difficult to think of a welfare index which is not much, much worse than before the war.

* Skilled human capital needed for reconstruction has fled en masse to the West with the middle-class diaspora. The US has wrenched control of domestic oil away from Iraqis themselves and toward “production sharing agreements” which get the oil flowing at the cost of redirecting its proceeds away from national development.

* * *

My main point is: (c) Was there another, a better option for removing Saddam? Will there be with the next guy? As Noam Chomsky has many times noted: Thug leaders who enjoy the support of the US are typically overthrown from within—at far less human cost than an outside force would inflict. Examples include Ceaușescu, Suharto, Marcos, Duvalier, Chun Doo Hwan, and Mobutu. In the case of Saddam, the US withdrew economic and diplomatic support on the eve of the Gulf War and pinched Iraq with the severest sanctions regime in history. This course of action hurt precisely everyone in Iraq except the regime. It forced the population to cling to Saddam for survival, weakening the possibility for opposition currents to thrive. There is no reason to doubt the typical pattern would have held had the US taken a more “hands off” course.

The flaw in “racial profiling” for terrorists

Watching old footage of Ann Coulter turned up an argument heard in this country many times since 9/11. Coulter begins by citing patterns in the terrorist demographic. She lists bombing attacks in which Americans have died, concluding, “The perpetrators have all had the same eye color, hair color, skin color and half of them have been named Muhammad…This is not racial profiling; it’s a description of the suspect.”

Of course, she is being characteristically cheeky. She advocates “racial profiling” by name on the basis of this demonstrated pattern. In short, a terrorist is more likely to come from x-racial-group than from y or z groups; therefore, we are warranted in searching for terrorists among that group particularly. This might include singling out men fitting this description for random baggage or ID checks in subways, or funneling them through a separate check-in line at airports.

For the sake of argument, let us assume that Coulter’s data on terrorists is correct (though it isn’t); it is the form of the argument that is most suspect. It gives too much weight to the (statistical) relationship of racial groups to one another—that is, to the relative percentages of terrorists within each group. If we’re thinking properly, what should matter instead is the relationship of terrorists to their own racial groups.

The important point is not what percentage of terrorists are ‘middle eastern males with funny sounding names,’ versus males of some other group—but rather, what percentage of ‘middle eastern males with funny sounding names’ are actually terrorists.

Analogy #1: For all I know, men named George are .000003% more likely to be serial killers than men with other names. This would hardly mean that shaking down a bunch of Georges would be a wise deployment of police resources. Shit, even if 100% of serial killers were named George, those Georges who actually commit serial murder are such a tiny minority among men named George that the strategy would still be suspect.
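To put toy numbers on the George analogy (every figure here is invented for illustration), the two conditional probabilities can come apart completely:

```python
# Hypothetical figures only: even if 100% of serial killers were
# named George, almost no one named George is a serial killer.
georges = 5_000_000           # men named George (invented number)
serial_killers = 50           # active serial killers (invented number)
killers_named_george = 50     # assume every single one is a George

# The statistic the profiler cites -- what fraction of killers fit the profile:
p_george_given_killer = killers_named_george / serial_killers   # = 1.0

# The statistic that decides whether shakedowns are worthwhile --
# what fraction of profile-fitters are killers:
p_killer_given_george = killers_named_george / georges          # = 0.00001
```

The first number is a “perfect” profile; the second says a random stop of a George has a one-in-100,000 chance of finding anyone at all.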

Analogy #2: Imagine we have a haystack which has some probability of containing a needle. (That is, there is some probability that one of the straws is a needle.) Let this probability match that of a given, random Arabic man’s being a terrorist; drawing a random straw is as likely to yield a needle as “drawing” a random Arabic man is likely to yield a terrorist. Let us assume this method of finding needles is ineffective, counterproductive, even immoral; also, that we have some far better method of finding needles in haystacks—using magnets, X-ray, floating the straw on water so the needle sinks, etc. We still want to root out needles, but have long abandoned the strategy of drawing random straws.

Now, imagine we discover that all along there has been a second haystack nearby which has an even lower probability of yielding a needle than our haystack. Perhaps we discover several more, each with some probability lower than the original, but still more than zero. It has become clear that a needle is more likely to come from the first haystack than from any of the others. Still, it would be irrational in the highest to conclude that we should, on this basis, resume our random straw draws. The simple fact that a less promising haystack exists does not magically make checking this stack a good idea, if it wasn’t a good idea before.

Similarly, the simple fact that terrorists are more likely to come from middle eastern men than from some other group doesn’t mean that the likelihood of randomly finding them among middle eastern men is very good at all.


The obvious question is just how good that likelihood is. I haven’t exactly crunched the numbers; you can do the math if you like. But there are millions of men in the world who fit Coulter’s “profile” and very nearly zero of these commit terrorist acts against Americans. Even fewer do so in those stereotypical ways that profiling would address. Even fewer operate in the U.S., where our laws can actually penetrate. Clearly, we are dealing with numbers akin to those Georges who commit serial killing. It is quite likely that if we incarcerated every other Muslim male in the world, it would do practically nothing to diminish the odds of the next terror attack. Yes, we can theoretically halve a .000003% chance of something. Getting married later in life will halve one’s chances of committing suicide someday. Buying a second ticket will double one’s chances of winning the lottery. There is shit you could do right now to halve your chances of being brainwashed by a cult or eaten by a mountain lion. Who gives a shit? Differences of this infinitesimal grade should no more drive policy than they drive anybody’s consideration of anything else in the real world.

This is not to mention that radical Islamists come in all “colors” and (duh) will easily work around any profile we make. Plus, racial profiling is counterproductive; it alienates those communities which are most critical for intelligence on the potential attackers that move and live among and gain cover from them. “Profiled” individuals tend to avoid law enforcement as much as possible.

Finally, as with finding needles in haystacks, we have a much better alternative strategy for fighting terrorism. Granted, jihadists will cite a number of gripes against the US if you ask them. Some of these concern cultural factors like our women’s liberation and sexy music and movies. But according to the evidence, these aren’t the “root” reasons they turn to terror. As I’ve argued elsewhere (link below), the violence is a response to US foreign policy in and toward Muslim countries and populations. Thankfully, these concerns are quite reasonable, technically solvable, and are morally “overdetermined”—that is, they should be met for a host of reasons even aside from fighting terror.

[See the last boldface section of the post here.]

On workplace incentives: The ethics of getting extra for what we are already supposed to be doing

[Dedicated to E. N.]

I work at a bank. I’m not a teller but my job description includes running a cash till. There are incentives for balancing—making sure we don’t give away or take in too much money through miscount. After repeated threats to do away with these incentives, management finally has. Budgetary reasons aside, the idea is that it is one’s job to balance, and it makes no sense to “reward a person for doing what he is supposed to be doing already.”

This post explores this argument in the interest of workers’ intellectual self-defense.

Hyperbolic indignation

The bankers’ argument should seem odd on its face, for we constantly violate it in practice. In every other avenue of life there is space for praising (i.e., rewarding) people for doing what they should already be doing. “Giving recognition” to those who simply meet the dictates of their roles is widely considered a virtue. What a monstrous spouse, for instance, who never thanked or praised the other for doing their “duty” as husband or wife. The same goes for parents of children, teachers of pupils, close friends, etc. On consideration, I suspect the vast majority of what we praise and reward falls solidly within the region of what one is “supposed to do” already.

The ‘extreme’ cases show the point most clearly. Consider heroes: Most would agree that the lifeguard who dives in to save the drowning child, or the returning soldier who has braved bullets in defense of country, are due some special recognition—paycheck aside. At least, nobody finds it odd or illegitimate to give it. (Indeed, “hero” is a normative term; to apply it to such persons is already to confer special recognition.) Nonetheless, every hero, when interviewed, humbly insists that he only did “what anyone would have done in my position” or no more than “what I had to do.” And he is entirely correct: Despite our high praise, we can agree that, given his circumstances, he had to try and save that child (etc.). He was “supposed to do” it. Indeed, if he had twiddled his thumbs while the child flailed about, or the bullets flew, he would not only be a non-hero like the rest of us, but a sonuvabitch deserving everything from harsh criticism to legal prosecution.

Finally, it occurs to me that on a daily basis I recognize my dog for performing such acts as coming to me when I call her; perhaps I even give her a biscuit. She is then a “good dog.” But coming when I call isn’t really “good”; it’s just un-bad. That is, a dog who comes when called is just behaving as a companion animal should; it doesn’t exceed the mandates of what she is “supposed to do.” And yet, each of us understands the praise attending this behavior. It is just what a proper “owner” (yuck) does.

It would appear, then, that the scope of “rewardable” behavior overlaps with the scope of what one is “supposed to do.” Some of the latter are also the former. They are not entirely separate dimensions, as the bankers’ argument would have it. At least, nobody really thinks they are. To quote Charles Peirce, “let us not pretend to doubt in philosophy [i.e., when constructing arguments] what we do not doubt in our hearts.”

“Supposed to” versus “have to” or “is expected to”

The question remains: Why do we reward “supposed to” behavior? We do it, but can we justify it?

First, while virtually everyone supports rewarding heroic behaviors, it is actually easier to justify rewards like balancing incentives. This, because unlike a lifeguard’s rescuing the drowning child, a teller doesn’t have to balance. Being out of balance (within reason) is not a fireable offense; almost every teller is out from time to time. (If she weren’t, nobody would ever think to reward her for it.) So every time a teller balances, while she may be doing what she is “supposed to do,” she is nonetheless doing more than she must do.

From another angle: Yes, one is “supposed to” meet one’s job description. But any job description is the articulation of an ideal. As “ideal” would suggest, nobody lives up to it, nor does anyone think they will. This being the case, “supposed to” is a rather hollow basis for an incentive scheme. What one is “supposed to do” is always more than what we expect of him; one ought to balance all the time, but he is not expected to.

In short, we are well within our rights to reward those who exceed expectations, who do more than they have to do. Whether this falls within what they are “supposed to do” is beside the point.

“Bonus” is just a figure of speech

An incentive is presented as a “bonus,” something added to a predetermined wage-base. But let us not make too much of mere words. We can just as accurately describe it as a penalty for not balancing. Instead of a chance to pad my check, maybe I’m just fighting not to lose the last part of it. We have as much reason to look at it that way as the way our employer presents it.

At best, then, a balancing incentive is ambiguous; depending on how you look at it, it could be a bonus, or it could be a penalty. Is there anything which might favor one interpretation over the other? I can think of two lines of argument:


The legal convention of contra proferentem (“against the one bringing forth”) says that, when a dispute hangs on ambiguities in the wording of a contract, it goes against the party who drafted the document. This is designed to prevent the deliberate use of language which can be “bent” one way to entice a second party and then “bent” the other way to win a dispute. Law aside, the principle behind this is a good one, with broad application. The party perpetrating an ambiguity—and any dispute stemming from it—should be the very last to benefit from it. The party upon whom the ambiguity was “passively” foisted gets the benefit of any doubt.

In short, when faced with the “bonus vs. penalty” ambiguity, our prejudice should go against the party who set the whole scheme up. In this case, it is clearly the employer.


More important, whenever other kinds of employer “contributions” to wages have been studied, they have been shown to be factored as part of wages themselves.

The clearest example is payroll taxes. A percentage of each worker’s paycheck (15.3%) is owed in payroll tax. In theory, the worker pays half of this, and the employer pays the other half. But this is a “difference that makes no difference.” The employer is responsible for sending 15.3% of each wage to the government—period; it makes no difference that half this amount appears in writing on the paycheck stub.

The question is only whether the payroll tax is ultimately carved out of employer profits, or worker wages. The answer is not controversial; virtually all tax economists, across the political spectrum, agree: The payroll match is anticipated in the decision to hire any new worker, and the wage adjusted downward accordingly. An employer will not hire an additional worker unless the costs associated with taking her on—wages-proper, inputs, additional wear and tear on machinery, etc.—are less than the value of what she will produce. The payroll match has become a “cost of production” like any other, and is accounted as such. And why wouldn’t it be? Let us not make a fetish of bookkeeping; in real effect, the entire tax is deducted from wages.[1] There is no reason to think that balancing incentives work any differently.
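The incidence point is just arithmetic. Here is a sketch with an invented budget figure (the 7.65%/7.65% split is the statutory one; the $50,000 and everything else are hypothetical):

```python
# Hypothetical: the employer budgets $50,000 in total cost for a position.
total_cost = 50_000.0
EMPLOYER_SHARE = 0.0765     # employer "half" of the 15.3% payroll tax
EMPLOYEE_SHARE = 0.0765     # worker "half", withheld on the stub

# The stub wage W is set so that W plus the employer match fits the budget:
w = total_cost / (1 + EMPLOYER_SHARE)

take_home = w * (1 - EMPLOYEE_SHARE)               # what the worker pockets
total_tax = w * (EMPLOYER_SHARE + EMPLOYEE_SHARE)  # what the government gets

# In real effect the whole 15.3% comes out of the worker's side:
# take-home pay equals the budget minus the ENTIRE tax, regardless of
# which "half" appears on the paycheck stub.
assert abs(take_home - (total_cost - total_tax)) < 1e-9
```

Shuffle the split however you like (100/0, 0/100, 50/50): with the employer’s total cost pinned to what the worker produces, the worker’s take-home is always the budget minus the whole tax.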


The bankers’ argument presumes that the only legitimate reason to give “extra” is when people are “owed” it. This “economic thinking” is encouraged by capitalism. In real life, we reward people for all kinds of reasons—we want to influence behavior; make a statement; keep people around; avoid the consequences of not rewarding; or just to be nice. (Even on its own terms, the bankers’ argument is too narrow; it overlooks the loss-preventive function of balancing incentives: One hesitates to pilfer a twenty from the till if that move will cost her a few hundred come the end of the quarter.)

The idea that we can’t reward work one is “supposed to do” is so contrary to how we behave that it must be ideological—that is, a reflection of the stories capitalism has to tell to live with itself. Namely, the whole system is said to rest on the free and fair exchange of “value-equivalents.” All “factors” of production “deserve” (are bought/sold at) a price in proportion to what they contribute to production. Labor is just another factor; so anything it earns above its price—like a bonus—represents an injustice toward the employer.

A bit of (oversimplified) Marxist theory tells us this story is false. When “free and equal” exchange is generalized, each capitalist just breaks even. Buying at cost and selling at cost gets him nowhere. Nor can this be circumvented by selling at a markup; marking up iron spoons by 10% just “washes” with the 10% markup which the previous capitalist placed on the iron ore sold to make the spoons. The key to profit is to find a “factor” which, like magic, produces value in excess of its own value. Labor alone fits this bill; the quantity of goods it takes to sustain itself (in a given period) is less than what it can produce during that time.[2]
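A back-of-envelope version of the “washing” claim, with made-up figures (and assuming, for this step of the argument only, that labor is bought at the full value it adds):

```python
# Toy illustration (all figures invented): when a 10% markup is
# generalized -- applied to inputs, wage goods, and output alike --
# it cancels out and each capitalist just breaks even.
MARKUP = 1.10
ore_value = 100.0       # value of the iron ore input
labor_value = 100.0     # value added by spoon-making labor

ore_price = ore_value * MARKUP      # 110: the input arrives already marked up
wage_bill = labor_value * MARKUP    # 110: wage goods carry the markup too
spoon_price = (ore_value + labor_value) * MARKUP   # 220: output sold marked up

profit = spoon_price - ore_price - wage_bill       # ~0: the markups cancel
```

The 10% he adds on the way out is eaten by the 10% everyone else added on the way in, which is why the story needs a “factor” that yields more value than it costs.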

Not only, then, is everything which capitalists call a “bonus” not one; but under the current system, the very idea of a bonus is impossible. Capitalists cannot add anything to the “value” of labor because they don’t pay its “value” in the first place—if they wish to remain capitalists for long.[3] No incentive package will raise a worker’s takings to her real worth. There is no such thing as an “illegitimate” reward, right up to the point that the entire operation is transferred in share to the workers.


[1] Example taken from Paul Krugman, Fuzzy Math: The Essential Guide to the Bush Tax Plan, pp. 42-43.

[2] The Marxist theory of exploitation and the labor theory of value are given in more detail here (Sect. 1 mostly):

[3] By “value” in this sentence I mean value in explicit neoclassical capitalist terms—the value of what labor produces for the capitalist; in truth, the value of labor is the value of whatever goods it takes to keep the worker alive, which, again, is less than this quantity.