
Moral Realism

by Bill Meacham on March 3rd, 2015

At a recent conference at the University of Texas at Austin, Jeff McMahan, a distinguished professor of ethics and the author of numerous works on the subject, revealed a shortcoming in much of contemporary ethical thought. The context was a discussion of charitable giving and the ethical obligations it entails, and the professor posed a conundrum. There appear to be scenarios, he said, in which one is not obligated to do an act that would produce some good, a praiseworthy act (doing so would be supererogatory, above and beyond the call of duty), but in which, once one has engaged in the act, one is obligated to maximize the good one does. Here are some examples:

  • A wealthy person dies and leaves in her will an extraordinary amount of money to her dog. She is not obligated to leave her money to any entity, but once she has decided to give it to somebody or something, it seems morally wrong to give it to a dog rather than to a charity that would benefit human beings. (This is a real case, by the way. The benefactress was Leona Helmsley, and her will made quite a stir in the news a few years ago.)
  • You are confronted with a burning building inside which are a human child and a bird in a cage. Entering the building would be quite dangerous, and you are under no obligation to do so. But if you do, and you save the bird instead of the child, people would be justified in blaming you for not saving the child.
  • You find a person trapped under some wreckage. To save him you would have to clear away the wreckage at some risk that it might fall on you. Again, you are under no obligation to try to save the person. But if you do, you have a choice: you can save him by amputating one of his arms or by moving the wreckage, thereby saving both arms. If you save only one arm, people are justified in blaming you for not saving both.
  • You may, if you choose, give money to charity, but you are under no obligation to do so. If you decide to give money to a charity that provides seeing-eye dogs to blind people at a cost of several thousand dollars each, you can be blamed for not giving it to a charity that provides cataract operations that prevent blindness at a cost of only a few dollars each, as the latter provides far more benefit per dollar than the former (see the illustrative arithmetic just after this list).

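To put rough numbers on that last scenario (the figures are illustrative assumptions of mine, not figures cited at the conference): suppose a seeing-eye dog costs \$3{,}000 to provide and a cataract operation costs \$10. Then a single gift of \$3{,}000 funds either one dog or

\[
\frac{\$3{,}000}{\$10 \text{ per operation}} = 300 \text{ operations},
\]

that is, it assists one blind person or prevents blindness for three hundred people.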
All these scenarios are analogous. They elicit moral intuitions about what is right and wrong, that is, about what is morally required, forbidden or permissible. In each case, doing something that you are not required to do and that can produce some good puts you in a situation in which you are morally required to choose the greater good. But you could, without blame, have chosen not to do anything. It is the last of the cases above that worried the professor: faced with the prospect of blame for failing to give to the right charity, you might decide not to give any money at all! That is not an attractive outcome for someone affiliated with charities.

What’s wrong with this picture? Philosophy seeks to find logically coherent principles that describe certain broad aspects of reality, in this case morality. But in these examples the principles are not logically coherent. You can’t be blamed for not doing good, but you can be blamed for doing less good than you could. But doing no good is certainly doing less good than you could. So we have a contradiction: you can both be blameless and blameworthy for doing less good than you could.

When we are faced with a contradiction, it generally means there is a problem with one or more of the premises, so let’s have a look at them. The way to argue from intuitions about cases, according to McMahan, is first to form an intuition about a particular case and then to generalize from that case to a universal rule.(1) Here is the argument in detail:

  • Premise 1: Doing act A will produce more good than failing to do it.
  • Premise 2: You are not required to do act A, but you are permitted to do it.
  • Conclusion 1: You are not required to do more good than less.


  • Premise 3: Act A entails two possible further acts, A1 and A2, which are mutually exclusive.
  • Premise 4: Doing A2 will produce more good than doing A1.
  • Premise 5: Having done (or at least started to do) A, you are required to do A2.
  • Conclusion 2: You are required to do more good than less.

So which premise is faulty? Not 1, 3 or 4, as these are simply facts about the situation, stipulated to be true for the sake of argument, not moral claims. It must be either premise 2 or premise 5, for the conclusions generalized from them contradict each other. So it is either false (from premise 2) or true (from premise 5) that you are required to do more good than less. But which is it?
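For those who like to see the clash spelled out, it can be put compactly in the notation of deontic logic (a sketch of my own, not the professor’s formalization). Read O as “it is obligatory that” and let M stand for “you do more good rather than less”:

\[
\begin{aligned}
&\neg O(M) && \text{(Conclusion 1, generalized from premises 1 and 2)}\\
&O(M) && \text{(Conclusion 2, generalized from premises 3, 4 and 5)}\\
&\neg O(M) \land O(M) \vdash \bot && \text{(together they entail a contradiction)}
\end{aligned}
\]

Whichever of premises 2 and 5 we keep, the other must go; the formalism restates the dilemma but does not resolve it.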

Here is where the shortcoming I referred to above is revealed. Much contemporary moral discourse assumes that there is a moral fact of the matter, a position known as “moral realism.” Moral realism is the doctrine that “ethical sentences express propositions that refer to objective features of the world (that is, features independent of subjective opinion)….”(2) Indeed, in discussion McMahan averred that some actions, such as those in the scenarios above that produce the lesser good, are “objectively impermissible.” A second assumption is that such objective moral rules are logically coherent.

Taking the second assumption first, I suppose someone might argue that objective moral principles do not need to be logically coherent. If so, I reply, they could not provide a reliable guide to conduct, so we might as well ignore them. The practical effect of their being incoherent would be the same as if they did not exist. So we can safely assume that if there are objective moral principles, they are logically coherent.

The problem with moral realism is, of course, how to determine just what those objective moral features of the world are. The professor’s scenarios illustrate the difficulty. Is it true or false that one is required to do more good than less? How can we tell?

Typically, different flavors of moral realism posit different sources of morality and thus different methods of determining what the moral rules are. If you think that moral rules result from the decree of God, for instance, then you will refer to scripture and ecclesiastical authority to find out what they are. If you think that moral rules are simply objective features of the world without reference to their source, then you will rely on moral intuition. And the professor’s scenarios all rely on moral intuition.

But the moral intuitions contradict each other. One intuition has it that we are not required to do more good than less, and the other has it that we are. Could it be that moral intuitions are not a reliable way to find out what objective moral reality is? If so, then (absent divine decree) we have no way to find out!

No matter which way you look at it—that the moral principles contradict each other or the moral intuitions do—moral realism puts us in a quagmire of uncertainty. We have conflicting moral intuitions but are without a way to resolve the conflict.

Perhaps moral realism itself is the problem. Let’s suppose that the moral features of the universe are not objective, not independent of subjective opinion, and see where that supposition leads us. Well, if they are not objective, then what are they? We do, after all, have moral intuitions. What are they intuitions of?

They are indeed intuitions of moral rules, but the rules are socially constituted rather than independently existent in the way physical objects are. By “socially constituted” I mean that within a community of practice, a social group, a culture or a society, everybody agrees (more or less) on what the rules are, everybody treats them the same way and everybody acts as if they are real. So, for members of such a community, they are real. Their reality can be seen in their effects. People really would not blame you for staying out of the burning building, but they also really would blame you for saving the bird rather than the child once you are in there.

Unlike supposedly objective moral rules, however, socially constituted rules are under no requirement to be logically consistent. People are not, by and large, logically consistent all the time, as evidenced by everyday observation and lots of social and psychological research.(3) Social conventions evolved as humans learned to live with each other in groups; they were not derived by logical inference from first principles.

Are they then as unreliable as incoherent objective principles would be? No, because, like most of human activity, they are context-dependent. If you are outside the building, you are blameless for staying there. Once inside, however, you can be blamed for failing to save the child. Different contexts have different rules. Human psychology is full of such context-dependent heuristic rules. We would be in sorry shape if we had to reason out, step by step, everything we had to do. Those proto-humans who tried that approach did not become our ancestors.

From all this it appears that it makes more sense to regard our moral intuitions as revealing a socially constructed morality than as revealing an objective moral reality. (Of course socially constructed rules are still objective in the sense of being independent of any particular person’s subjective opinion; but socially constructed morality may vary from culture to culture, whereas on the moral realist view morality does not vary.)

The proponents of moral realism might object to this conclusion by claiming that a premise has been overlooked: you are permitted to do less good than more if doing more would entail considerable risk or cost to you; otherwise you must do the greater good. When you are outside the building, it is clearly riskier to enter than to stay out. But once inside, neither alternative is riskier than the other. The original risk, having now been taken, is what economists call a sunk cost,(4) and rationally it should play no further role in decision-making.(5)
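To see why the already-taken risk drops out of the comparison, here is a toy expected-value sketch (my own illustration in schematic symbols, not anything presented at the conference). Let r be the cost of the risk of entering the building and B the benefit of each rescue:

\[
\begin{aligned}
V(\text{save child}) &= B_{\text{child}} - r\\
V(\text{save bird}) &= B_{\text{bird}} - r\\
V(\text{save child}) - V(\text{save bird}) &= B_{\text{child}} - B_{\text{bird}}
\end{aligned}
\]

The r cancels: once you are inside, the choice between rescues depends only on the benefits, so the added premise excuses staying outside but never excuses saving the bird.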

This move weakens the case against moral realism, but does not defeat it. In order to make sense of the conflicting intuitions, the moral realist has to pile on additional premises, making the structure more and more complex. The scenarios above are not the only ones in which moral intuitions conflict. Should you tell the Nazis that Jews are hidden in your house or lie and say they aren’t? Should you allow abortion in order to protect a woman’s personal integrity or force her to have an unwanted baby to protect the fetus’ right to life? What about intuitions that Those People are evil and disgusting and should be exterminated and Our People are good and honorable and should dominate the earth? Depending on which group you are in, you may agree or disagree rather strongly. In each case, in order to make a coherent set of moral rules, you have to add more and more conditions, clauses and stipulations until the whole thing becomes unwieldy. You don’t have to do that if you take morality to be socially constructed; you just accept the inconsistencies because human beings are inconsistent creatures.

But in either case, whether you are a moral realist or not, you need to decide whether you will obey the moral rules revealed in your intuitions. How to decide that is a topic for another time.

Notes

(1) McMahan, “Moral Intuition,” p. 110. There is more to the process than this. In further steps you judge how well the universal rule fits with other such rules, go back and forth between the universal system and particular intuitions, and so forth. Here I simplify for the sake of argument.

(2) Wikipedia, “Moral realism.”

(3) See, for instance, the work of Jonathan Haidt and Daniel Kahneman, among others.

(4) Wikipedia, “Sunk costs.”

(5) Thanks to Professor David Sosa for making this point in the discussion.

References

Haidt, Jonathan. The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon Books, 2012.

Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

McMahan, Jeff. “Moral Intuition,” in Hugh LaFollette and Ingmar Persson, eds., The Blackwell Guide to Ethical Theory, 2nd edition. Oxford: Blackwell, 2013, pp. 103–120.

Wikipedia. “Moral realism.” Online publication https://en.wikipedia.org/wiki/Moral_realism as of 28 February 2015.

Wikipedia. “Sunk costs.” Online publication http://en.wikipedia.org/wiki/Sunk_costs as of 2 March 2015.


Comments
  1. Gene Reshanov permalink

    As your fellow moral non-realist (though perhaps of a different shade) I was pleased to read this paper. Here is my two pennies’ worth of moral anti-realism: when people try to explain why something is good (or bad), they seldom grasp for the absolute good (or bad) of moral realism. They seldom say “It’s bad just because it’s Bad!” More likely they will reason from the perspective of a goal, a benefit or a desire. Leaving a child to burn inside a house would deprive the child of many years of rewarding life experiences, devastate the child’s parents, etc., etc. It is because of all those things that it’s bad. One can also add “and because it is intrinsically bad in a moral-realism sort of way,” but all the heavy lifting is already done elsewhere. The explanatory value of moral realism is suspect; it looks as though it simply “steals the thunder.” Perhaps the absolute good quite literally is “good for nothing.”

    • Bill Meacham permalink

      > Leaving a child to burn inside a house would deprive the child of many years of rewarding life experiences, devastate child’s parents, etc., etc.

      Yes, abandoning the child would be bad for the child and its parents, and so forth. But why should someone not related to the child care? Why should that fact motivate one to help? (I do not disagree with you, just want further clarification.)

  2. Gene Reshanov permalink

    My writing style is condensed and often lacks the proper level of clarification. My apologies. I did not mean that the burning-child “benefit calculation” of my little blurb was something that could really motivate the actor A to act. It was meant rather as a next-day explanation by a neighbor or bystander B. That B would think it bad I am pretty sure, unless B is a sociopath. Most humans have very strong built-in feelings about child-in-danger situations. This does not come as a result of years of studying moral theory. I blame the usual suspects: our genes. Initially B would experience un-analyzed, mostly instinctual scorn and anger towards A. But eventually, if asked, B would try to explain that reaction with something more rational, some kind of “benefit calculation.” Similarly, if anything, it would be the un-analyzed emotional drive that would propel A to save the child. The absence of such a drive in A arouses my suspicion of sociopathy.

  3. Jeffrey Stukuls permalink

    Bill, what are the first principles upon which moral realism is built, and are they universally consistent? (I mean, do those principles apply beyond Earth and humans?) Similarly, other than from consistently repeatable experiences, how do people derive anything objective?

