Utilitarianism Fails
One of the tools of contemporary analytic philosophy is the thought experiment, an imaginative scenario intended to help us clarify our concepts. Here’s one from Philippa Foot:
We are about to give a patient who needs it to save his life a massive dose of a certain drug in short supply. There arrive, however, five other patients each of whom could be saved by one-fifth of that dose. We say with regret that we cannot spare our whole supply of the drug for a single patient, just as we should say that we could not spare the whole resources of a ward for one dangerously ill individual when ambulances arrive bringing in victims of a multiple crash. We feel bound to let one man die rather than many if that is our only choice.(1)
The concept to be clarified here is moral obligation. Foot assumes that saving five lives rather than one is obviously the morally correct thing to do. But is it?
John Taurek argues that there are cases in which it would be perfectly OK to save one instead of many. Suppose the one is a friend of the person dispensing the drug, and the others are strangers. Taurek thinks it would be permissible to give the whole dose to the friend:
Suppose this one person, call him David, is someone I know and like, and the others are strangers to me. I might well give all of my drug to him. And I am inclined to think that were I to do so I would not be acting immorally.(2)
In other words, his personal preference would override the moral obligation. But that means that the moral obligation would be, as he says, “feeble indeed,”(3) so feeble that perhaps it doesn’t even exist. Perhaps there is no moral obligation to save the many rather than the one.
Here we have conflicting moral intuitions. Foot thinks it is morally obligatory to save the many rather than the one, and Taurek disagrees. Is there a way to tell who is right?
Taurek goes on to spin out several variations of the scenario in hopes of further clarifying the issue. What if the one is on the verge of discovering some wonder drug or negotiating a lasting peace? Saving his life (let’s just assume for the moment that the person is male) would have greater benefit to humanity than saving the five, so we should save him. Or what if the one person is just an average guy, but the others are unworthy or deficient in some way? Maybe they are known criminals or brain-damaged infants. Would that change the situation?
Such considerations muddy the water, however. To really clarify the issue we need to compare apples to apples, considering the cases ceteris paribus, as philosophers say: all else being equal. Let’s assume that there is nothing special about any of the people involved. Then further variations of the thought experiment might shed more light. The way these experiments work is to take a situation, vary the details, and see what emerges.
What if there were only one other person instead of five? Then there would be no moral obligation to favor one over the other. If that one person were our friend, it would certainly be OK to give the medicine to him. If we knew neither one, then we could just flip a coin. There would be no moral issue at all.
But the case of one versus many seems to be different. Many people, Foot among them, think there is an obligation to save the many because, in Taurek’s words, “it is a worse thing, other things being equal, that these five innocent persons should die than it is that this one should.”(4)
Let’s pause for a moment here to look at the language Taurek uses. We’ve been talking about moral obligation, which is in what I call the Rightness paradigm. It uses the terms “right” and “wrong” and their synonyms to evaluate actions.(5) The Rightness paradigm has to do with moral imperatives, rights, and obligations, all couched in terms of what it is right to do or to refrain from doing. Now Taurek gives a justification for the supposed obligation in terms of goodness (“a worse thing”). Clearly he is referring to a Utilitarian view, that what one ought to do, morally, is to promote the greatest good for the greatest number of people.(6) Goodness language is different from rightness language, and we need to look closely to see whether the intersection of the two actually makes sense.
To this end, Taurek tweaks the thought experiment slightly:
Suppose the drug belongs to your friend David. It is his drug, his required dosage. Now there are these five strangers, strangers to David as well as to you. Would you try to persuade David to give his drug to these five people? Do you think you should? Suppose you were to try. How would you begin? You are asking him to give up his life so that each of the five others, all strangers to him, might continue to live.(7)
The Utilitarian view is that David should indeed give up his life because five deaths are worse than one. (This assumes that death is a bad thing, which we can dispute, but let it go for now.) We tell David that five people dying is a worse outcome than just one dying. And he responds, “Worse for whom?” His own death is worse for him, and the death of each of the five others is worse for that person. But so what? David says,
I wouldn’t ask, nor would I expect, any of them to give up his life so that I, a perfect stranger, might continue to live mine. But why should you, or any of them, expect me to give up my life so that each of them might continue to live his?(8)
Taurek frames the controversy in moral (rightness) terms. He says that in keeping the drug for himself David wrongs no one. None of the five has a legitimate claim on the drug, and so the five together have no right to demand it. And this justifies Taurek’s giving his own drug to the one rather than the many:
If it is morally permissible for David in this situation to give himself all of his drug, why should it be morally impermissible for me to do the same? It is my drug. … I violate no one’s rights when I use the drug to save David’s life.(9)
But Taurek is no different from anybody else in this regard.
And so I feel compelled to deny that any third party, relevant special obligations apart, would be morally required to save the five persons and let David die.(10)
So, what to do in this situation? Taurek suggests just flipping a coin.
Why not give each person an equal chance to survive? Perhaps I could flip a coin. Heads, I give my drug to these five. Tails, I give it to this one. In this way I give each of the six people a fifty-fifty chance of surviving. Where such an option is open to me it would seem best to express my equal concern and respect for each person. Who among them could complain that I have done wrong?(11)
At this point many of us might balk. Regardless of such casuistry, isn’t the suffering of the many five times worse than the suffering of the one? Taurek’s answer is No, it is not worse in any absolute sense. There is no absolute goodness or badness here, only goodness or badness for individuals.
Taurek’s problem with Utilitarianism is that individuals are separate beings, and you can’t sum up their pleasures or pains. Let’s assume that the death of any of the six would cause suffering, not for themselves because they’ll be dead, but for their friends and family. The death of five would cause more people to suffer than the death of one. But, says Taurek,
Suffering is not additive in this way. The discomfort of each of a large number of individuals experiencing a minor headache does not add up to anyone’s experiencing a migraine. In such a trade-off situation as this we are to compare your pain or your loss, not to our collective or total pain, whatever exactly that is supposed to be, but to what will be suffered or lost by any given single one of us.(12)
The death of one is no better or worse than the death of any of the five, and you can’t compare the one with the five as a whole because you can’t really add up the suffering of the five individuals. So Utilitarianism doesn’t work to justify saving the many rather than the one.
And that is Taurek’s conclusion. We are under no obligation, he says, to save the many rather than the one. (He doesn’t think that we must save the one rather than the many, only that we may.) If we have no personal interest in the outcome—if none of them are friends, for instance—then we might as well flip a coin. And yet, rational or not, most people think that saving the many is what we should do. Flipping a coin feels a bit cold.
Let’s stop and take stock of the argument so far. What have we learned?
First, that intuition is a poor guide to reality. Which should we save, the one or the many? Some of us have the intuition that we are obliged to save the many. Others think we are not so obliged. The problem is that there’s no way to tell which intuition is right; there’s no objective measurement, nothing we can observe to answer the question. (The lack of a reliable method is evidence for moral anti-realism, the idea that there are no objective moral values or normative facts at all, but that’s a topic for another time.)
Another thing we have learned is that we can’t get a satisfactory answer to the question within the Rightness paradigm. Taurek tries. He does a good job of deflating the Utilitarian position and argues cleverly on legalistic grounds, looking at who is wronged and who has a valid claim, that we are not obliged to save the many. But recent research shows that by far the majority of people, when faced with an actual dilemma rather than just a thought experiment, do in fact act to save the many.(13) The Rightness paradigm can’t deal with this fact other than to figuratively throw up its hands and say that people aren’t rational. Which is true, but we aspire to be.
Fortunately, we are not stuck with this conundrum. There is another way to think about problems such as this, to frame the discussion in terms of goodness rather than rightness.(14) The Goodness paradigm, as I call it, uses “good” and “bad” rather than “right” and “wrong” to evaluate actions. It frames issues of what to do in terms of harms and benefits, that is to say, consequences, rather than moral rules. And an ethic rooted in the Goodness paradigm not only tells us that we should try to save the many but also gives us reasons why. But first we have to understand a bigger picture.
One of the basic facts about all things and persons is that everything is related to everything else. Nothing exists in isolation. A change in an organism affects its environment, and a change in the environment affects the organism. This is easy to see in our case. We humans are creatures whose essence is Mitsein, as Heidegger puts it, being-with.(15) It’s not merely that we sometimes or even often find ourselves in the proximity of others. Rather, in every facet of ourselves we find a connection with other people. Ethologists call us “obligatorily gregarious.”(16) We must have ongoing and extensive contact with our fellows in order to survive and thrive.
Here’s how this idea plays out in the conundrum of whether to save the one or the many. In fact, there is a way that the suffering of the many is additive. We feel empathy for others; we can imagine their pain and feel it, in an attenuated form, as if it were our own. Of course, there are limits. We easily feel empathy and compassion for individuals and small groups, probably for evolutionary reasons: we evolved to live in tribes of 20 or 30 or so. It’s harder to feel empathy for a great many people, such as those injured in a mass disaster. We get compassion fatigue.(17) But for a group of five, we certainly do feel their pain, and it is greater than the pain of only one. That’s why we feel an urge to save the many.
Given that everything is related to everything else, the Goodness Ethic, as I call it, advises us to try to maximize the good in all situations and to maximize what is good for all concerned.(18) It gives this advice because as we maximize the good of everybody and everything in the environment, we thereby promote our own health as well. This is enlightened self-interest, as opposed to unenlightened self-interest, which seeks to maximize one’s own welfare without regard to the effects of one’s actions on others. Commonly called “selfishness,” such an unenlightened approach is actually self-defeating.
Although similar, this is not Utilitarianism, which we have seen doesn’t give an adequate answer. Utilitarianism, even though expressed in terms of consequences, is actually a form of rules-based ethics. It’s in the Rightness paradigm, not the Goodness. For the Utilitarian, the amount of benefit or harm determines the moral rightness of action, and we are to maximize benefit because it’s our moral duty.
The Goodness paradigm, on the other hand, says no such thing. We are advised (not commanded) to maximize benefit because it’s better for us. So, yes, we should save the many, not because it’s our duty but because in doing so we become better humans.
Notes
(1) Foot, “The Problem of Abortion and the Doctrine of Double Effect,” p. 9.
(2) Taurek, “Should The Numbers Count?”, p. 295.
(3) Ibid., p. 298.
(4) Ibid., p. 296.
(5) Meacham, “The Good and The Right.”
(6) Wikipedia, “Utilitarianism.”
(7) Taurek, “Should The Numbers Count?”, p. 299.
(8) Ibid.
(9) Ibid., p. 301.
(10) Ibid., p. 303.
(11) Ibid.
(12) Ibid., p. 308.
(13) Engber, “Does the Trolley Problem Have a Problem?”
(14) Meacham, “The Good and The Right.”
(15) Heidegger, Being and Time, p. 160, translator’s footnote 2.
(16) de Waal, Primates and Philosophers, p. 4.
(17) Dholakia, “How Long Does Public Empathy Last After a Natural Disaster?”
(18) Meacham, “The Goodness Ethic.”
References
de Waal, Frans. Primates and Philosophers: How Morality Evolved. Princeton: Princeton University Press, 2006.
Dholakia, Utpal. “How Long Does Public Empathy Last After a Natural Disaster?” Online publication https://www.psychologytoday.com/us/blog/the-science-behind-behavior/201709/how-long-does-public-empathy-last-after-natural-disaster as of 18 March 2022.
Engber, Daniel. “Does the Trolley Problem Have a Problem?” Online publication https://slate.com/technology/2018/06/psychologys-trolley-problem-might-have-a-problem.html as of 18 March 2022.
Foot, Philippa. “The Problem of Abortion and the Doctrine of Double Effect.” The Oxford Review No. 5 (1967), pp. 5-15. Online publication https://spot.colorado.edu/~heathwoo/phil3100,SP09/foot.pdf as of 12 March 2022. Reprinted in Foot, Virtues and Vices (Berkeley: University of California Press, 1978), pp. 19-32.
Heidegger, Martin. Being and Time. Tr. John Macquarrie and Edward Robinson. New York: Harper and Row, Harper-SanFrancisco, 1962.
Meacham, Bill. “The Good and The Right.” Online publication https://www.bmeacham.com/whatswhat/GoodAndRight.html.
Meacham, Bill. “The Goodness Ethic.” Online publication https://www.bmeacham.com/whatswhat/GoodnessEthic.html.
Taurek, John M. “Should The Numbers Count?” Philosophy and Public Affairs Vol. 6, No. 4 (Summer 1977), pp. 293-316. Online publication http://pitt.edu/~mthompso/readings/taurek.pdf as of 2 March 2015.
Wikipedia, “Utilitarianism.” Online publication https://en.wikipedia.org/wiki/Utilitarianism as of 18 March 2022.
Comments
I take morality more in the sense of “how it is good to live your life.”
The most troubling issue for Utilitarianism, for me, is this thought experiment: is it right to take the organs from a healthy person in order to save five who need transplants? It is similar to taking the drug from one person to save five, but to me it is even more morally repugnant.
The solution I see is to optimize utility for something greater than the sum of the welfare of individuals. We can optimize for the long-term endurance of humanity; that is the ultimate Good I can think of.
With this approach, taking the organs from a healthy person is not only morally repulsive but also likely to create a worse society, one that lessens humanity’s chances of long-term survival, even after Earth becomes uninhabitable.
This approach is not as normative as the Rightness approach; it is more on the Goodness side. Maybe having too many people on Earth is very bad beyond a certain limit; they might suffer from hunger in the future.
Some people’s lives might be worth more than others’ in the rare situations when you have to choose. However, we need predictable laws that protect people, if only to keep society functional. It is like Utilitarianism for the long term. We should be very careful not to justify laws that our intuitions find morally repugnant; otherwise society would collapse.
While I am a layman in philosophy, it would be an honor for me if you could read some of my philosophical ideas in Secular Morality.
One well-known argument regarding abortion asks you to consider whether or not you are morally obligated to hook your body up to another’s for nine months in order to save that other’s life. (Which I guess could be finessed by qualifying that it neither comes at zero physical risk to you nor guarantees a successful outcome for the other.)