
Reassessing Morality Part 2

by Bill Meacham on November 5th, 2018

(This is the second of a two-part series. In the first part I argued that morality is best conceived as a socially constructed reality.)

Part II: The Practice of Morality

When we recognize the socially constructed status of moral rules, responsibilities, obligations, prohibitions and the like we may find ourselves in a bit of a quandary: what to do with our new understanding. We understand that these things do not, in fact, apply universally. Now we have a choice: shall we take them to apply to us? We could, it seems, just ignore them, or ignore the ones we don’t like. But on what basis would it be rational to ignore them, and which ones?

Morality of some sort is necessary for human existence, for we cannot live without others of our kind. Zoologists classify the human species as “obligatorily gregarious.”[1] We must have ongoing and extensive contact with our fellows in order to survive and thrive, and morality governs those interactions. Suppose we wanted to devise a moral system for universal use. On what rational basis could we choose the rules of that system?

It is theoretically possible to opt out of socially constructed reality in a way that we cannot opt out of physical and mathematical/logical reality. If everybody by some magical trick stopped believing in physical reality, it would assert itself anyway. Even if we believed we could, we could not in fact walk through a tree. The same goes for mathematical/logical reality. The square root of nine would still be three even if nobody believed it. But if everyone stopped believing in money, we would have no money. We would have only bits of paper and metal. Similarly, it seems that we could opt out of morality, although doing so would be quite difficult.

It would be difficult because socially constructed reality is not merely fictional; it is, in its own way, real. Powerful evolutionary forces have instilled in us a sense of morality; we can’t just wish it away. Moral entities, and institutional facts in general, have a peculiar nature: they compel our behavior even though we, in a sense, just make them up. They compel our behavior because they seem really to be there. Approaching the issue not analytically but from the point of view of a member of society, sociologists Peter Berger and Thomas Luckmann observe that institutional facts are “experienced as possessing a reality of their own, a reality that confronts the individual as an external and coercive fact.” The social world appears to each of us “in a manner analogous to the reality of the natural world … as an objective world.”[2] The socially constructed entities may exist only because we believe they do, but we believe they exist because they seem really to be there. And, for most of us, they continue to seem really to be there even after we recognize their socially constructed nature, much as an optical illusion still fools us even when we know that it is only an illusion.

It is no small thing to be an institutional fact. To minimize the importance of morality by saying that it is “just” socially constructed is to overlook its emotional and motivational force on us. You can remove yourself from some institutions, e.g. marriage, but to do so you generally need to do it with other people. In other words, you create an alternative social institution. Some communes may try to do away with money, but most of them have to interact with the outside world, which forces them to deal with money anyway. And yet, recognizing the socially constructed nature of morality opens a possibility that was not apparent to us before.

Before we think about it much, we treat moral rules as constraining our conduct because we take them for granted. Their socially constructed character is invisible to us, largely because our acceptance of them is not something we do deliberately. We are taught the moral rules by parents, elders and educators in our society. Just as we take money, marriage, government, property and the myriad other institutional facts as real, so we take moral rules as objectively real. We question them only when cracks in the structure of our social reality confront us, as illustrated by moral conflicts such as those mentioned in Part I. And many of us don’t even question them then.

But for those who do, a sort of spell is broken. Intellectually, we do not see our world the same way as before; we are no longer taken in by moral reality. Once we understand that morality is socially constructed, we have the freedom to buy into it or not. We are able to choose, within the constraints of our emotional and social conditioning, which duties to obey. This freedom can seem like a burden because emotionally we still feel the force of these moral intuitions. We may know intellectually that it is not always wrong to steal things, but we still cringe a bit at the thought of doing so.

Philosophically, the question of whether to obey certain moral rules and not others or to include certain ones but not others in a deliberately constructed moral system cannot be answered in the context of the moral rules in question, because to do so would be already to assume the answer. We need some other way to resolve the issue. The resolution can come by recognizing a further fact about rules for behavior: they are not all socially constructed.

Moral rules are socially constructed, but other rules are not: prudential or practical rules variously called “maxims,” “policies,” “rules of thumb” and the like. We do not have to evaluate our actions in terms of moral rightness and wrongness; we can instead evaluate them in terms of the benefits or harms of their consequences. Moral rightness is socially constructed. The effects of our actions are not.

Morality and Prudence: Rightness and Goodness

Morality and prudence are two ways of thinking about ethics. (By “ethics” I mean the evaluation of conduct generally. Morality and prudence are subsets of ethics.) Prudence is the exercise of rationality to promote one’s own interests. To act prudently is to act wisely and rationally in order to achieve one’s goals. I want to use the term “prudence” in a slightly more extended sense, as one’s chosen goals might not always be in one’s actual interest.

To understand the difference between morality and prudence, we can put the matter in linguistic terms. They are manifested as two clusters of concepts and language used to command or recommend specific actions or habits of character. We can call them rightness and goodness. The rightness paradigm recognizes that people live in groups that require organization and regulations, and frames values in terms of duty and conformance to rules. The goodness paradigm recognizes that people have desires and aspirations; it frames values in terms of what enables a being to achieve its ends. The right has to do with laws and rules; the good, with achievement of goals. Rightness and goodness are two alternative ways of organizing the whole field of ethics to carry out the tasks of evaluating conduct, both in particular cases and in general types.[3] Judgments of rightness and wrongness, like judgments of goodness and badness, can apply to particular actions, to types of actions, and to the habits of conduct that make up a person’s character.

Morality exemplifies the rightness paradigm, which uses the terms “right” and “wrong” to evaluate conduct. Some synonyms for “right” are “proper,” “moral” and “permissible.” Some synonyms for “wrong” are “improper,” “immoral” and “impermissible.” Morality is not the only kind of rightness. Others are law, which consists of legal rules enforced by the threat of physical coercion, and etiquette, social rules enforced solely by praise and blame. It is obvious that law and etiquette are socially constructed. As we have seen, it is reasonable to believe that morality is too.

Prudence exemplifies the goodness paradigm. That paradigm uses the terms “good” and “bad” to evaluate not only conduct but also things, people, states of affairs, etc., as well as maxims or guidelines for conduct. Some synonyms for “good” are “helpful,” “nourishing,” “beneficial,” “useful” and “effective.” Some synonyms for “bad” are their opposites: “unhelpful,” “unhealthy,” “damaging,” “useless” and “ineffective.”

When something benefits a thing or person, we call it good for that thing or person. Such goodness may be instrumental or biological. Instrumentally, a hammer is good for pounding nails, and what is good for the hammer is what enables it to do so well. Biologically, air, water, and food are good for living beings.

To make sense, an instrumental usage requires reference to someone’s purpose or intention. Thus, a hammer is good for pounding nails, and you pound nails in order to build things such as furniture or housing. Your intention is to acquire the comfort and utility these things afford you. That is your goal, or end, and the good is what helps bring it about.

The biological usage does not require reference to purpose or intention. It is expressed in terms of health and well-being. That which nourishes a living thing is good for it. The good, in this sense, is that which enables a thing to function well, that is, to survive, thrive and reproduce. (The function of a living thing is, intrinsically, to survive and reproduce.[4] Living things also have functions external to themselves in their habitat or biosphere, such as to provide shelter or nutrients or other goods to other living things. Here I mean function in the intrinsic sense.)

The instrumental usage intersects the biological when we consider what is good for something that is itself good for a purpose or intention. For instance, keeping a hammer clean and sheltered from the elements is good for the hammer and enables the hammer to fulfil its instrumental function. In the instrumental sense as well, the good is that which enables a thing to function well.

If someone says something is good, you can always ask “Good for whom? Good for what and under what circumstances?” If someone says something is right, you can always ask “According to what rule?” The two domains of discourse really are separate, and it is not useful to mix them. Mixing them is a form of category error. That something has good effects does not make it right. That something is in accordance with a moral rule does not make it good.

(As a caveat, let me say that the advice to pay attention to language in this way is useful for the most part, but not universally. I am proposing a heuristic rule of thumb, a tactic for getting clarity, not an infallible recipe. Sometimes the term “good” is used in a moralistic way, and there are other meanings of the term “right,” as in the right answer to a question. We have to pay attention to what is being asserted, not just to the specific words. But by and large, the language used to assess conduct provides a good clue to the nature of the assessment.)

Rightness and goodness differ in social usage. Both moral rules and consideration of consequences are ways to say “should,” that is, ways to tell someone what he or she should do (or refrain from doing) or should have done, or to tell ourselves the same. Moral rules are called “deontic,” from the Greek deon, meaning duty. But the deontic is not the only type of “should.” Another type, expressed in terms of goodness, is prudential or practical. In deontic cases the “should” is a prescription or even a command. In the prudential/practical case it is a recommendation. The force of our prescription or recommendation depends on the category in which the “should” is presented.

In the case of a deontic moral “should” such as “Thou shalt not steal” (“should” being stated in its strongest form, “shall”), we feel justified in demanding that people obey the moral rule and blaming them if they don’t. The imperative has a sense of universality, that it applies to everyone.

(In the case of a legal “should” we may not only demand and blame, we may also punish the offender. In the case of a “should” of social etiquette, we may only blame, but generally not demand. Neither of these is universal; they apply only within a certain legal framework or in a certain segment of society.)

An example of a prudential/practical “should” is that for good health you should eat lots of vegetables. In this case we may not demand but may certainly advise adherence to such a “should.” And we may not blame or punish failure to comply but may say the choice is foolish. Unlike moral rules, prudential/practical advice is not always universal. In practice, it depends on context. Perhaps for a malnourished vegan eating lots of vegetables would not be good, and instead he or she should try some meat.

The importance of the distinction is this: Unlike moral rules, which are not subject to objective verification, the good is a feature of the natural world; it has to do with benefits, which are publicly observable. Prudential/practical judgments are objectively verifiable. We can do studies of the effects of diet on health, for instance, studies that provide factual evidence, so the recommendation to eat vegetables is not just someone’s opinion.

Recognizing the difference between goodness and rightness shows us a way out of the quandaries and discomfort that arise from recognizing that morality is socially constructed. And recognizing the difference also shows us a way out of intractable moral conflict. Instead of framing the issues in terms of rightness, we can frame them in terms of goodness. In other words, instead of commanding one to do the right thing, we can advise one to do what is good.

Two Questions

The advice to promote goodness raises two obvious questions: Goodness for whom? And why should we do what is good anyway? A full discussion is beyond the scope of this essay, but in general the answer to the first question is, goodness for as many people as possible, including the person acting, within the bounds of what is doable. The answer to the second question is that promoting goodness in this way benefits oneself.

The underlying principle, taken from the study of systems theory applied to ecosystems, is that an element of a system thrives when the system as a whole is healthy, and a system as a whole is healthy when its constituent elements thrive. Human beings are elements in a variety of systems, most notably systems of other people, or communities. If, in situations of conflict, we can find ways to benefit all concerned, then we ourselves will be benefited. If everyone is satisfied, then the solution will be likely to last, leading to further benefit for ourselves. Short-sighted egotistical selfishness is self-defeating. The advice to seek goodness for as many concerned as possible is a strategy based on enlightened self-interest.

By the way, the injunction to work for the greater good is not utilitarian. Utilitarianism is just another morality, defining what is right in a certain way, as the greatest good for the greatest number of people. We are not obliged to maximize the good in this way. Rather, doing so is just good advice for maximizing our own welfare.

I suppose one could ask why we should maximize our own welfare. Again, a full answer is beyond the scope of this essay, but in short we have no absolute obligation to do so. In fact, however, most people do want their own welfare. The imperative is hypothetical, not categorical: If you want to enhance your welfare, work for the good of all concerned. In the absence of a rationally compelling reason to obey any given moral rule, this principle is well suited to serve as ethical guidance.

Summary and Conclusion

We started this inquiry by noting that some conflicts, those based on differing moral intuitions, resist easy solution. People entrenched in their morality have no inclination to compromise with what they see as evil. Along the way we identified a quandary felt by thoughtful people who want to be rational: we do not recognize an obligation to act on moral intuitions when we perceive them as the social constructions that they are. But which moral intuitions shall we abide by, and which shall we discard? On what basis shall we make the decision? And we feel a further discomfort when we contemplate opting out of morality but find ourselves emotionally locked in.

The way out of these issues is to recognize that there is another whole set of criteria by which to judge actions, people, policies and so forth, a set variously called “prudential” or “practical” and referred to by the language of goodness, not rightness. We can decide to focus on goodness, on what works to promote welfare, instead of on what rigid rules insist.

To apply this advice to conflicts such as those listed above, we can ask the combatants to think about what would be beneficial for both parties. This requires some tact and diplomacy, of course, but it is worth a try. If both parties receive some benefit, a lasting peace is more likely than if one party wins and the other loses.

To apply this advice to personal moral quandaries, when we are trying to figure out what to do we can ask what good can come out of each choice, not what the right choice is.

To apply this to an approach to our conduct in general, to our character as persons, we can focus on what is beneficial as a general rule. We might want to be honest, for instance, not because of a commandment to avoid bearing false witness, but because doing so promotes harmony and good relations with others, which in turn benefits us.

Morality is certainly useful for maintaining social cohesion. Universality has its appeal, but to get a cross-cultural or universal set of moral values we would have to design it. We could more readily do so on the basis of what is good for people than on sectarian moral codes.

This essay began by listing some of the ill effects of moral conflict. Focusing on benefits for all concerned instead of on rigid morality ameliorates them. Working for the common good promotes flexibility, understanding, trust and honest communication. The first step is to frame issues in terms of goodness, not rightness. The second step is to seek the good for all concerned, not because it is our duty, but because doing so will benefit each of us in the long run.


[1] de Waal, Primates and Philosophers, p. 4.

[2] Berger and Luckmann, The Social Construction of Reality, pp. 76, 77.

[3] Edel, “Right and Good.”

[4] Foot, Natural Goodness, pp. 31-32.


Berger, Peter L. and Thomas Luckmann. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. New York: Penguin Putnam Inc., 1966.

de Waal, Frans. Primates and Philosophers: How Morality Evolved. Princeton: Princeton University Press, 2006.

Edel, Abraham. “Right and Good.” Dictionary of the History of Ideas. Ed. Philip P. Wiener. 1974 edition, Vol. IV, pp. 173-187. Online publication, accessed 15 August 2017.

Foot, Philippa. Natural Goodness. Oxford: Oxford University Press, 2001.


  1. roland gibson

    I’ve always liked ANW: Fineness of action is treasured in the nature of things.

  2. Hi Bill. I read your blog once in a while, but for better or worse seldom leave comments. But today I have a few.

    First, the proposition “murder is wrong” caught my eye. Murder means wrongful killing, thus wrongful killing is wrong. No surprise. We can generalize this into “wrongful acts are wrongful”. And many such acts would fit the pattern: stealing = wrongful acquisition of property, etc. So it appears that moral pronouncements of this form are trivially true analytic propositions. Yawn. But some kind of prohibition is indeed being asserted underneath the empty “a=a”. In the case of murder, it can be formulated as “killing humans w/o proper authorization is forbidden”. So, perhaps the problem here is a technical one — the unfortunate “a=a” form obscures the actual but unspoken prohibition.

    The more serious problem I see is in your hopeful conclusion that our shifting to the goodness paradigm is going to ameliorate the ills created by the adherence to rigid “sectarian moral codes”. I suspect that you see a certain goodness ideal as more universally shared than it really is. Your vision of human flourishing and a good life is probably an American middle-class version of Aristotelian eudaemonia: good health, intellectual interests, rewarding professional life, financial security, civilized behavior, civic engagement. I’m in your camp in this regard, but not everyone is. To many people, the elements of your eudaemonia are fancy words with no meaning. To the Taliban, goodness looks like a full burqa; to white nationalists, it is the hegemony of the Aryan race with everyone else knowing their place, or else. To many people, goodness is your badness, Bill. To some, it is quite literally so: if you, the privileged, arugula-eating, philosophy-writing “city slicker” are hurting — that’s a good thing!

    • Bill Meacham

      Thanks for your comments. Here are a couple of replies.

      > wrongful killing is wrong … killing humans w/o proper authorization is forbidden

      Good point. I’ll try to think of a way to rephrase or clarify assertions like that.

      But “forbidden” can mean illegal or immoral. The two are not coextensive. Some illegal things are hardly immoral; and some immoral things are, sadly, quite legal. (Using an American middle-class rough idea of what is moral and immoral.) I could say that killing humans without proper authorization is immoral. Or I could say, using another definition of “murder,” that to kill or slaughter someone inhumanely or barbarously is immoral (definition 5). In either case, my writing would benefit from more clarity.

      > I suspect that you see a certain goodness ideal as more universally shared than it really is.

      Yes. The dicey part of my goodness ethic is how to educate others so they will understand what is actually good for them. I submit that both the Taliban and the white supremacists would in fact be better off if they abandoned their hateful ways. But they don’t see that, and the question is how to persuade them differently. I suspect the first step is respectful listening, not strident arguing back.

  3. I read your articles on “Reassessing Morality.” In the comments of part 2 you say:

    “The dicey part of my goodness ethic is how to educate others so they will understand what is actually good for them. I submit that both the Taliban and the white supremacists would in fact be better off if they abandoned their hateful ways. But they don’t see that, and the question is how to persuade them differently. I suspect the first step is respectful listening, not strident arguing back.”

    This immediately reminded me of a lecture I had in a comparative religion class at Tulane many years ago by professor Whittemore – brilliant man. He was discussing the phenomenon of fundamentalism. He said that all fundamentalism was consistent across religions. He explained that all fundamentalist thinking is based on a core belief that if the person does everything “right” all will be well. He further explained that the yardstick used was whatever text the religion held to be holy and that they were always literalists. He went on to point out that because of this, any questioning or modifications or interpretations of the text was typically forbidden. It is the “rule book.” He said that given this situation, when you confront a fundamentalist with an alternative view, they respond in one of two ways. They completely disregard you (because they have no respect for your opinion) or they want to kill you as a heretic. He said that in the first scenario, because they do not respect your opinion, or point of view, they do not consider you a threat. In the second case, you are undermining their entire world view, because IF the book is wrong, then their whole premise for salvation is wrong and they cannot face this possibility. They are entirely ego invested in this idea. Because of this, he believed all fundamentalism was based purely in fear. He pointed out that historically, the most horrible wars have been fought between people of very close, but slightly different faiths. He pointed out that there is great animosity between Jews, Christians and Muslims and also between Catholics and Protestants. He said they typically didn’t wage religious war on Buddhists, Confucianists or Taoists, for example, because they didn’t respect their views enough to be threatened by them – too different!

    The other thing that he pointed out was that fundamentalist religions were similar to cults in that they strongly sanction their members for free thinking. It is not promoted and not tolerated, typically. Questioning is not allowed. Because of this pattern, they only tolerate people with exactly the same views – all others are heretics and dangerous. If you question them, you are questioning the text and God and everybody else in the community. The text is the ONLY valid source of answers and thus is fixed forever. This is a real problem. It allows no modification, no change or tolerance.

    So, if what he said was correct, and I believe it basically is, listening will not help you with a fundamentalist unless you end up agreeing with them, unfortunately. Arguing with them, would, however, most probably be worse.

    Freud (I believe) said that the most dangerous thing you can do is to destroy a person’s core beliefs. It is extremely destabilizing. This is why this problem is so difficult to fix. If you promote non-thinking and intolerance for any other viewpoints, you have created a system that cannot safely change and you are training your adherents to be incapable of intelligently thinking through a change even if they wanted to. Why learn to think when it is not allowed? In reality, many people embrace fundamentalism BECAUSE they do not have to think (much). If they follow the rules, God will bless them. If they do not – hellfire and damnation are assured. It is a very simple belief system at core, though the complexity of the rule book varies from one group to another.

    It is also one of the reasons I am so impressed with the Sufis. Hazrat Inayat Khan said emphatically that there was no dogma in Sufism. The Sufis promote thinking and do not ever praise NOT thinking! Almost all of the Sufi stories I have read promote thinking and wisdom. They never glorify inflexible or stupid thinking or behavior – quite the opposite. That is the antidote to the fundamentalist mentality. The fundamentalist Muslims know this and because of that they kill the Sufis mercilessly. Unfortunately, once the pattern of fundamentalism is set, I do not see much chance of changing it. It is a puzzle we desperately need to solve, however.

