Just Curious

Please state the answer in the form of a question... Just Curious is the occasional blog of Andrew Nelson. In an attempt to balance the polemical tone of most of the blogosphere, every entry aims to pose at least one useful question. Many entries simply advance useful memes. Personal entries may abandon the interrogative conceit.

Thursday, January 26, 2006

what effect should statistical probability have on moral choices (if any)?

I was intrigued today by this letter to the Tribune. Here are a few of the more interesting paragraphs:

All civilized legal and moral systems operate on the principle that people who carry out an action with predictable results must be held accountable, as if they had intended those results. Therefore the statistically predictable innocent deaths that U.S. military attacks routinely and inevitably produce must be regarded as intended, if we take seriously the concept of moral agency and responsibility.

A truly unpredictable death is different. If an enemy tank in combat was unexpectedly harboring a lost child when attacked, we wouldn’t regard the child’s death as intended, since children cannot be predicted to be in a battlefield tank.

But the U.S. attack on Iraq and Pakistan is far different—the essence of our war on these countries, just as in Vietnam, involves methods that are statistically guaranteed to produce tens of thousands of civilian casualties. The civilian casualty rate in Iraq has been statistically fairly stable (and, thus, predictable) throughout the U.S. invasion, as was true in Vietnam.


I generally agree with the idea that we should have some sort of accountability system for systems with statistically predictable flaws. Workers' safety is a good example. Rather than make every workplace accident a case of individual moral judgment, we make the problem abstract and say, "Well, there are going to be so many accidents a year, so let's set aside a certain amount of money to compensate for them." A similar idea lies behind insurance systems -- at least, when they work well. But you can also see where we could use more of this sort of statistical morality -- in malpractice cases, for instance. Our system would be much fairer to doctors and patients if we handled (most) medical error through some sort of statistical system rather than focusing on spectacular cases.
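The "set aside a certain amount of money" logic above is just an expected-value calculation. Here's a minimal sketch of it in Python; the worker count, accident rate, and payout are all hypothetical numbers I made up for illustration:

```python
# Toy sketch of the statistical-compensation idea: rather than litigate each
# accident individually, estimate the expected number of incidents per year
# and fund a pool to cover them. All figures below are hypothetical.

def compensation_pool(workers, annual_accident_rate, payout_per_accident):
    """Expected annual accidents multiplied by the payout per claim."""
    expected_accidents = workers * annual_accident_rate
    return expected_accidents * payout_per_accident

# e.g. 10,000 workers, a 0.2% annual accident rate, $50,000 per claim
budget = compensation_pool(10_000, 0.002, 50_000)
print(f"Set aside ${budget:,.0f} per year")  # → Set aside $1,000,000 per year
```

The point of the abstraction is exactly what the paragraph says: the moral question shifts from "who is to blame for this accident?" to "is the predicted rate acceptable, and is the pool adequately funded?"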

But it seems like a tough call when it comes to military decisions. On an individual level, almost no one would say it is right to kill civilians in the name of a military goal, no matter how great. Yet just as with medical error (or, perhaps the more relevant case, police policy), a certain number of civilian deaths can be predicted over the long run, no matter what the military policy. I'm skeptical that we can approve or reject military policies based on the fact that *any* civilians would be killed. On the other hand, I don't know if there's any good way of coming up with an "acceptable" rate of error, especially when it might give the military leverage to use tactics that seem immoral. Statistical thinking is helpful in so many situations, though... I wonder if there is a way to use it to think about military policy that does not invalidate war altogether (not that that option is off the table).
