Nov 14 19

Neanderthals R Us

by Bill Meacham

Our closest living genetic relatives may be chimps and bonobos, but we have had even closer ones. Humans diverged from the ancestral line of primates to become a separate species about 5.5 million years ago. At that time we went our way, and the ancestors of chimps and bonobos went theirs. But those past 5.5 million years have seen a great variety of human-like creatures, of which we, Homo Sapiens, are only the latest. They have strange Latin names: Ardipithecus, Australopithecus, Homo naledi, Homo erectus and more. The most recent before us is Homo neanderthalensis, the Neanderthals.(1)

Neanderthal face
Neanderthal man, Natural History Museum, London

The name “Neanderthal” (or “Neandertal”; the “th” is pronounced simply as “t”) comes from the place where this species’ skeletons were first discovered, the Neander valley (“Tal” in German), near Düsseldorf in western Germany. The Neanderthals show many similarities to us Homo Sapiens, but they died out about 40,000 years ago. We, obviously, have flourished. The question is, what is it about Sapiens that gave us the advantage?

We know quite a bit about this extinct species from archeology (“stones and bones”), of course, but also from DNA reconstruction and the new field of computational neuroanatomy. Sometime between 630,000 and 520,000 years ago the shared ancestors of Neanderthals and Sapiens diverged and embarked on separate evolutionary paths. Those who spread to the Middle East, Europe and western Asia eventually evolved into Neanderthals, whereas those in Africa gave rise to us modern humans, whom we have immodestly named Homo Sapiens, Latin for “wise human.”

Neanderthal range
Where Neanderthal remains have been found
Neanderthal man
Neanderthal Museum, Mettmann, Germany

At the time, during the Pleistocene epoch, the world was much colder; it was an ice age, although there were about a dozen warmer periods within it. In Africa, what is now the Sahara desert was periodically a moist and fecund savanna. Europe, bounded by glaciers in the north, was heavily forested. Humans in Africa became slender and hairless, adapted to remain cool in the heat and to be able to run long distances over grassy plains in pursuit of game. Humans in Europe became short and stocky, adapted to retaining body heat. Their upper leg was longer than that of Sapiens in proportion to the lower, probably an adaptation to climbing hills. Rather than running long distances to hunt, they sprinted, pursuing their prey in short bursts of speed. Their skulls were flatter and more elongated than ours, with protruding faces, prominent brows, large noses, and receding chins. Perhaps European fairy tales of gnomes and trolls are ancestral memories of human-like others who were shorter, lumpier, more deformed and—to modern eyes—uglier than we. Neanderthals lived in small and isolated populations of no more than about 3,000 individuals per region. Harsh climate and scarcity of resources likely contributed to keeping their numbers low.

Sapiens and Neanderthal skulls
Skulls of Sapiens (left) and Neanderthal (right)

Neanderthals were similar enough to Sapiens in biology and behavior that they interbred with us at several times and places between 100,000 and 40,000 years ago. About two percent of the DNA of modern humans of European descent comes from Neanderthals. Asians and Melanesians have slightly more. Contemporary Africans have none. These genes seem to have contributed to lighter skin, lower blood levels of LDL (“bad”) cholesterol (thus reduced risk of heart disease) and higher levels of vitamin D (helpful in a gloomy climate). All of these would have contributed to genetic fitness. We should be grateful to our Neanderthal ancestors for helping the survival of our lineage.

Neanderthals had some of the same kinds of cultural artifacts that Sapiens had: stone tools, ornaments made of bone and other materials, and the like. Some of them buried their dead and may have painted designs on cave walls. But Neanderthal tool making changed little over hundreds of thousands of years; they were well adapted to their environment and had little impetus to change.

It is clear that Sapiens were smarter. Sapiens’ art, tools and cultural artifacts far outstrip those of the Neanderthals. Even if some cave paintings are Neanderthal—and that thesis is contentious—the famous paintings of lifelike animals in the caves of Chauvet, Altamira and Lascaux, clearly made by Sapiens, far transcend them. Neanderthal brains were about the same size as ours, but were constructed differently internally. They had more capacity devoted to vision and body control, with less left over for social interactions and complex cognition. We became smarter because of our environment. Sapiens evolved in the ancestral savanna for several hundred thousand years under changing climatic conditions while Neanderthals stagnated in Europe and western Asia. Sapiens were in constant contact with people from other tribes and became smarter because minds evolve by bumping up against other minds. Neanderthals, living in a harsher climate with less social contact, had less selection pressure to increase intelligence.

There are numerous conjectures about why Sapiens endured and Neanderthals went extinct. Perhaps our metabolism was more efficient. Perhaps we were less susceptible to Neanderthal diseases than they were to ours. Perhaps it was climate change: our physiology was better suited to the spreading grasslands of Europe as the forests receded, depriving the Neanderthals of their native habitat. Perhaps it was simply because we were smarter and could make better tools.

Yuval Harari plausibly speculates that Sapiens could outcompete their rival species because they had a greater capacity for communal social reality. Socially constructed realities such as shared mythologies and religion enabled Sapiens to coordinate the activities of a great many people, uniting bands of Sapiens more efficiently than relatively isolated Neanderthal tribes. Artifacts dispersed over many hundreds of miles indicate extensive Sapiens trading networks.

Such competition may well have been violent. It is not hard to imagine tribes of Sapiens warring against people who did not even look quite human; after all, we have a long history of warring against people who do. Sapiens bands would aggressively move in on the Neanderthals’ territory and chase them out or kill them in order to capture their resources. It would be a totally primate thing to do.

But competition didn’t have to be violent. Superior hunting techniques, especially in an environment that was becoming drier and less forested, could have enabled Sapiens to capture more game, starving the Neanderthals out. The Neanderthals, facing food shortages, would have had to move away to find sustenance. After a while there was no place else to move to. It’s a sad story, really; it’s easy to feel a bit sorry for them.

Probably multiple causes contributed to Neanderthal decline, but our enhanced capacity to construct social realities stands out. That is quite a cognitive achievement, and understanding it is a key part of understanding who and what we are.

Philosopher John Searle calls socially constructed realities “institutional facts.”(2) They are facts that exist only by virtue of collective agreement or acceptance, and there are quite a number of them. Searle mentions money, property, marriages, governments, tools, restaurants, schools and many others. They exist only because we believe them to exist.

We don’t much notice socially constructed realities because they are just part of the background. We take for granted marriage, bankruptcy, nations, legal codes and lots of other things that don’t have physical existence (although they may well affect or be instantiated by physical things). Yet they are quite real in that they have real effects on people. Try telling the judge that you don’t have to obey the law because it doesn’t exist!

One of the most prevalent socially constructed realities is morality. The details of what conduct is prohibited, allowed and required by the moral code may vary from culture to culture, but all cultures have sets of rules, whether stated explicitly or not, that specify how people are to act. And people in every culture—which is to say all people, as we never find humans in isolation—have internalized the moral code of their culture and have a conscience, a sense of right and wrong.

Because moral rules are socially constructed, they are subject to change if enough people agree. Over the years, moral codes have indeed changed for the better; we no longer tolerate slavery, for instance, and are becoming more accepting of sexual preferences that used to be thought depraved and sinful. As we learn to take the point of view of others, we promote kindness and compassion, which benefit all of us.

If we came across a band of Neanderthals today, hidden away in some remote valley, I hope we would care for them as much as we do for other endangered species. They were no less human than we, only different.


(1) There was a similar species in eastern Asia, the Denisovans, named after a cave in Siberia where their remains were first found. Their physique and lifestyle were probably comparable to Neanderthals. Also like Neanderthals, they went extinct shortly after Sapiens showed up in their territory. My comments about Sapiens’ cognitive superiority to Neanderthals apply also to Denisovans.

(2) Searle, The Construction of Social Reality, pp. 2, 28, 43-45.


Alex, Bridget. “Neanderthal Brains: Bigger, Not Necessarily Better.” Online publication as of 17 October 2019.

Akst, Jef. “Infographic: History of Ancient Hominin Interbreeding.” Online publication as of 30 September 2019.

Harari, Yuval Noah. Sapiens: A Brief History of Humankind. New York: HarperCollins, 2015.

Hendry, Lisa. “Who were the Neanderthals?” Online publication as of 13 October 2019.

O’Shea, Dennis. “Short legs let Neanderthals climb mountains.” Online publication as of 6 November 2010.

Various authors. “Evolution: The Human Saga.” Scientific American, September 2014, Volume 311, Number 3.

Searle, John R. The Construction of Social Reality. New York: The Free Press, 1995.

Stromberg, Joseph. “Science Shows Why You’re Smarter Than a Neanderthal.” Online publication as of 13 October 2019.

Tattersall, Ian. “Homo sapiens.” In Britannica Online Encyclopedia. Online publication as of 4 November 2019.

Than, Ker. “Did disease buy time before Neanderthal extinction?” Online publication as of 14 November 2019.

Touropia. “10 Prehistoric Cave Paintings.” Online publication as of 12 November 2019.

Tuttle, Russell Howard. “Human evolution.” In Britannica Online Encyclopedia. Online publication as of 4 November 2019.

Viegas, Jen. “Brain Reconstructions Suggest Reasons for the Decline of Neanderthals.” Online publication as of 13 October 2019.

Viegas, Jen. “Neanderthal DNA Influences the Looks and Behavior of Modern Humans.” Online publication as of 9 November 2019.

Wikipedia. “Neanderthal.” Online publication as of 6 November 2019.

Wikipedia. “Neanderthal behavior.” Online publication as of 6 November 2019.

Wikipedia. “Neanderthal extinction.” Online publication as of 6 November 2019.

Sep 9 19

More About Function

by Bill Meacham

In my book and other writings I have appealed to the notion of function to explain how we can achieve a degree of satisfaction or fulfillment in our lives. Taking “function” to mean what we are good at or good for, I claim that doing our function well is key to our flourishing and is accompanied by a feeling of well-being. On a personal, idiosyncratic level, if you are good at sports but not math, you will be better off pursuing a career, or at least a hobby, in the former rather than the latter. On a generic level applicable to all humans, if we can figure out what human beings in general are good for or good at, we can have a happy life by developing and exercising those abilities. As Aristotle says,

[A clearer account of happiness] might perhaps be given, if we could first ascertain the function of man. For just as for a flute-player, a sculptor, or an artist, and, in general, for all things that have a function or activity, the good and the ‘well’ is thought to reside in the function, so would it seem to be for man, if he has a function.(1)

In this essay I examine more carefully just what the notion of function entails, summarizing some of the recent philosophical research on the topic.(2) The Greek word ergon in Plato and Aristotle, translated as “function” or “work”, means what something does or what it is there for(3), what good it does.(4) Modern analysis gives us more detail. Just as our understanding of physics has gone well beyond Aristotle, so has our understanding of what function really is.

First, note that there are two kinds of function, biological and instrumental. Biological functions are things such as these: the function of the heart is to pump blood; the function of the eye is to see; legs and feet function to enable an organism to stand and move around; the function of a polar bear’s white fur is to provide warmth and camouflage in snow. In all these cases the function of the part contributes to the ongoing life and health of a living being.

Instrumental functions pertain to artifacts and involve a deliberate purpose. For instance, the function of a telephone is to enable people to talk to each other over long distances. The purpose of talking could be many things, such as making an appointment or finding out information or just chatting. The purpose of doing those things is to contribute to the ongoing life of a human being.

In both cases we end up with a contribution to life, but in the biological case the contribution is direct and need not involve deliberate purpose, whereas in the instrumental case the contribution is indirect and does involve deliberate purpose. The modern analysis attempts to find parallels between these two kinds of function.

In both cases a thing’s function is a subset of what it does. A heart does a number of things: it pumps blood, it makes a sort of thumping noise, it makes squiggly lines on an electrocardiogram. Why do we say that its function is to pump blood, but not to make noise? Because pumping blood contributes to the health of the animal, but making noise is just a byproduct. If there were a silent organ that pumped blood, it would be a heart; but if there were a noisy organ that sort of looked like a heart but did not pump blood, it would not.

Similarly, a telephone does more than one thing: it enables people to talk to each other over a distance, it holds down papers when placed on top of them, it annoys people when it rings in a library, and so forth. Why do we say that its function is to enable communication and not to make noise? Because enabling long-distance communication is what the artifact is designed to do. A silent artifact that enabled us to talk to each other over a distance would count as a telephone, but a thing that rings but doesn’t connect distant people for talking would not.

As you can see, the contribution of an organ to the health of its host animal is analogous to the contribution of an artifact to the purpose of the structure in which it is placed. Both are embedded in larger systems. A heart is one organ among many in an animal; a single telephone is one device among many in a communications network. The heart, when it functions well, keeps the animal alive; the telephone, when it works, enables the communications network to fulfill the purpose for which it was designed. In both cases the entity in question is good for something within a larger context.

But what something is good for is not in itself enough to call it a function. A heart is a good source of nutrients for someone (or something) who eats it, but that’s not the function it evolved to serve. If we just consider hearts in the abstract, we would not say that their function is to provide trace minerals and B vitamins to those who consume them, but to pump blood. A telephone might be good for acting as a paperweight, but that is not its function, or at least not the function it was designed for. If we just consider telephones in the abstract, we would not say that their function is to be paperweights, but to be communication devices.

In both cases, how something came to be is part of what we mean by “function.” The heart came to do what it does by means of evolution through natural selection. The telephone came to do what it does by means of deliberate design and manufacture. We can say that hearts exist because they pump blood, and they pump blood because they evolved to do so. (More precisely: because doing so caused the proliferation of ancestors of animals containing hearts.) We can say that telephones exist because they enable long-distance communication by voice, and they enable such communication because someone designed them to do so.

Philosophers have sparred about whether an organ can be said to have a function because it contributes to the well-being of a present-time organism or only because it contributed to the reproductive success of that organism’s ancestors. I think the distinction is a bit trivial because the present-time organism has the potential to be an ancestor of future organisms, and the traits that contribute to its well-being also contributed to that of its ancestors. In either case, contribution to the success of the larger system of which it is part is crucial. The organ replicates through generations because it contributes to the well-being—in evolutionary terms, the fitness—of the organism of which it is a part.

To sum up the discussion so far, the concept of biological function is exactly parallel to that of instrumental function.

Here is the biological account:

  • An element has a function if it contributes in some way to the ongoing health, operation or maintenance of the organism of which it is a part; and
  • It came about through a process of natural selection such that its operation gave a selectional advantage to the organism’s ancestors.

Here is the instrumental account:

  • An element has a function if it contributes in some way to the ongoing operation or maintenance of the artifact or system of artifacts of which it is a part; and
  • It came about by deliberate design.

Now, to return to the original question, we can ask what the functions of the human being are. I focus on the biological account because I don’t consider humans to be artifacts (although some theists might disagree). The modern concept of function goes beyond the ancient Greek idea of what work (ergon) something does, and now the reason why good functioning leads to well-being is clearer. If an organ functions well, it contributes to the functioning of the whole, which in turn nourishes the organ. But does it make sense to consider humans to be organs in some larger whole?

That’s a profound question. Before we attempt an answer, let’s remember that humans are, obviously, living beings. In order to have any effect at all on whatever larger system we might find ourselves in, we have to be alive. Aristotle distinguishes three ways beings can be alive. He calls ways of being alive “soul” (psyche), and he contrasts the human way of being alive with two others, that of plants and that of non-human animals. Plants, animals and humans are all alive. All have soul; not a soul, but soul in general; we can call it soulness. Soulness in plants enables them to take in nutrients, grow and reproduce. Soulness in animals enables them to do those things and, in addition, to perceive their world and in most cases move around. The soulness of humans is that humans do all that plants and animals do and even more. Humans have in addition, according to Aristotle, the power to think rationally.(5)

The connection between functioning well and well-being is clear. A plant that absorbs nutrients well does better than one that absorbs nutrients poorly; that is, it has a better chance of surviving, thriving and reproducing. An animal that perceives its world and gets around in it well has a better chance of surviving, thriving and reproducing than one that does those things poorly. And human beings who think well have a better chance of surviving and thriving (for humans, reproducing is optional) than those who think poorly.

But humans do lots of things besides think. We can ride bicycles, play Frisbee, watch TV, argue with each other and do many other things. Which ones shall we look at to find out how to lead a fulfilling life? Plato says that a thing’s function is what only it does or what it does better than anything else.(6) Even so, there are quite a few things that humans do that other animals don’t do at all or don’t do as well. An Internet search for what makes humans special yields these and more:

  • We think symbolically and abstractly about objects, principles, and ideas that are not physically present.
  • We use language to communicate complex concepts and to coordinate social roles and group activities.
  • We have rich culture. We can transmit and replicate ideas, symbols and practices very quickly through writing, speech, gestures and rituals.
  • We cooperate in large, well-organized groups and employ a complex morality that relies on reputation and punishment.
  • We can understand what others are thinking and mentally take their point of view. We can intuit what another person is thinking so that we can both work together toward a shared goal.
  • We make tools of far greater complexity than the simple ones that apes, dolphins, birds and other animals use.
  • We create art and music.
  • We can pay attention to ourselves and think about our own thinking. This capacity is what I call second-order thinking, also known as meta-cognition and self-awareness. It is the foundation of our freedom to make choices and form our own destiny.

These are all functions in Plato’s sense; they are unique capacities that humans have. Arguably, doing any of them well enhances our ability to flourish and enjoy a sense of well-being. But what of the more recent understanding of function? Are humans anything like organs existing in a more comprehensive organism? If so, in what way do we contribute to the ongoing health, operation or maintenance of that organism?

Perhaps “organism” is too grandiose a term, being more metaphorical than literal, but it is undeniable that we exist and function within larger systems. We are embedded in nature; our role as creatures within a bioregion is quite analogous to that of organs within an organism. In addition, we are embedded in social systems: families, tribes, neighborhoods, cities, nations, clubs, religious assemblies, professional organizations, economic enterprises, political parties, sports teams and many more. Being with others is not optional for us; we must have ongoing and extensive contact with our fellows in order to survive and thrive.

Within these systems, our role is unique. Unlike nonhuman animals, we can choose our function. That is, we can choose whether and in what way our effects on the systems in which we are embedded enhance those systems. We can impose instrumental function on our biological and social foundations.

For example, in the natural realm a skillful homesteader can design and maintain a local ecosystem to be healthy and provide nourishment and benefit to its caretaker and to the plants and animals within it. My Permaculture teacher says the functions of humans (Permaculture calls them “services”) are to plan, to design and to haul around large amounts of stuff. But if the homesteader is not skillful, the ecosystem is likely to decline. In the larger ecosystem of our entire planet, we can collectively choose whether to take action to avert climate disaster or to stand back and let it get worse.

In social settings, there are numerous ways we can work for the greater good of our group or community and thereby increase our own well-being. We can volunteer to help out, we can take on leadership, we can be loving and kind to our neighbors, we can advocate for good policies, we can provide useful services, we can just smile and be friendly. Or not; it is up to us.

Plato says that the human soul’s function is deliberating, managing and ruling.(7) In other words, our function, if we choose to accept it, is to be stewards of our natural and social environment. But we can also ignore that opportunity. The potential for exercising a useful function is there, but it is up to us whether to actualize it. We can use our vast intelligence to function as stewards and take charge of the world in which we find ourselves situated. If we choose to exercise that function well, we flourish; if not, we don’t. The choice is ours.


(1) Aristotle, Nicomachean Ethics, I.7, 1097b 22-29.

(2) See Buller, Cummins, Millikan, Neander, Sober and Wright. In the years since Wright’s influential analysis in 1973 something approaching a consensus has emerged among analytic philosophers as to the meaning of the term “function.” As philosophers do, they have quibbled with each other about minor points, but the broad outline is clear. I am grateful to Professors Sinan Dogramaci and Ray Buchanan of the University of Texas at Austin for allowing me to sit in on their 2018 seminar on telos, function and explanation, where I was introduced to these thinkers.

(3) Wright, “Functions,” p. 146.

(4) Foot, Natural Goodness, p. 32.

(5) Aristotle, On The Soul, 2-3, 413a 20 – 415a 10.

(6) Plato, The Republic, 353a.

(7) Plato, The Republic, 353d.


Aristotle. Nicomachean Ethics. Tr. W.D. Ross. Introduction to Aristotle, Ed. Richard McKeon. New York: Random House Modern Library, 1947.

Aristotle. On the Soul. Tr. J.A. Smith. Introduction to Aristotle, Ed. Richard McKeon. New York: Random House Modern Library, 1947.

Buller, David J. “Introduction: Natural Teleology.” In Buller, David J., ed. Function, Selection and Design. Albany, New York: SUNY Press, 1999, pp. 1-27.

Cummins, Robert. “Functional Analysis.” The Journal of Philosophy, Vol. 72, No. 20. (Nov. 20, 1975), pp. 741-765. Online publication as of 17 August 2007.

Foot, Philippa. Natural Goodness. Oxford: Oxford University Press, 2001.

Millikan, Ruth Garrett. “Proper Functions.” In Buller, David J., ed. Function, Selection and Design. Albany, New York: SUNY Press, 1999, pp. 85-95.

Neander, Karen. “The teleological notion of ‘function’.” Australasian Journal of Philosophy, Volume 69, Number 4, December 1991, pp. 454-468. Online publication as of 12 January 2018.

Plato. The Republic. Tr. Paul Shorey. The Collected Dialogues of Plato. Ed. Edith Hamilton and Huntington Cairns. New York: Pantheon Books, 1963.

Sober, Elliott. Philosophy of Biology, Second Edition. Boulder, Colorado: Westview Press, 2000, pp. 86-88.

Wright, Larry. “Functions.” The Philosophical Review, Vol. 82, No. 2 (Apr., 1973), pp. 139-168. Online publication as of 22 May 2012.

Apr 1 19


by Bill Meacham

One of the main themes of my book How To Be An Excellent Human is that our happiness and fulfillment depend on how well we exercise our uniquely human abilities, the chief of which is second-order thinking, that is, thinking about our own thinking. It is variously known as self-awareness, self-knowledge, metacognition, mindfulness and emotional intelligence. A recent article on how to cope with procrastination nicely illustrates how we can use this ability effectively.

Procrastination, putting off something you know needs to be done in favor of doing something less important, is an example of what the ancient Greeks called akrasia, often translated as weakness of will. Literally it means lack of command, specifically lack of command over yourself. You suffer from akrasia when you know what’s good for you but do something else instead.(1)

The ancients puzzled over this phenomenon. Socrates thought it was merely a product of ignorance; if you do something harmful to you, you don’t really know what is good for you.(2) Aristotle had a more nuanced view, recognizing that people’s rational judgment can be overcome by emotion. You know what’s good for you, but your emotions influence you to do something else instead.(3)

And that is exactly what recent research says about procrastination. According to journalist Charlotte Lieberman, citing research by psychology professor Fuschia Sirois, procrastination is not due to a lack of time-management skills, but to lack of mastery over your emotions. For whatever reason, you find the prospect of the task before you distasteful. Perhaps it’s boring; perhaps it’s inherently stinky, dirty or in some other way disagreeable; perhaps it triggers insecurities or fear of failure. In any case, you’d rather do something else. Finding something else to do alleviates those unpleasant feelings, and the immediate relief acts as a reinforcer, making it harder to avoid procrastination in the future.

The momentary relief we feel when procrastinating is actually what makes the cycle especially vicious. In the immediate present, putting off a task provides relief — “you’ve been rewarded for procrastinating,” Dr. Sirois says. And we know from basic behaviorism that when we’re rewarded for something, we tend to do it again. This is precisely why procrastination tends not to be a one-off behavior, but a cycle, one that easily becomes a chronic habit.(4)

So what can we do about it? Sheer will power may work for some, but probably not for most of us. Social psychologist Jonathan Haidt says human nature is two-fold. Each of us is like a rider on an elephant. The rider part is how we like to think of ourselves, as rational beings in charge of our actions. The elephant part is the mass of instinctual desires and reactions that really, in a great many cases, determines what we do.(5) Imagine trying to stop a stampeding elephant by standing in front of it and waving your hands and shouting at it. That’s how ineffective our will power is during moments of procrastination. A more effective approach is to ride the elephant and gently nudge it in the direction you want to go. The trick as rider is to outwit the elephant. There are a number of ways to do so.

Lieberman suggests things you can do when faced with the temptation to procrastinate:

  • Notice and pay attention to what is going on in the moment when you feel tempted to procrastinate. How does your body feel? Where is there tension? What’s going on with your breath? Is it rhythmic or irregular? Is it slow and deep or fast and shallow? Putting your attention on these things interrupts the compulsion to do something other than what you know you should.
  • When tempted to procrastinate, consider, purely as an abstract exercise, what your next action would be if you were to undertake the task you want to avoid. Would you get out the vacuum cleaner? Would you put a date at the top of the document you need to write? Then do that little action, and start some momentum in the desired direction.

She also suggests things you can do ahead of time, when you are not faced with the task you are typically tempted to avoid and are not in the grip of the urge to procrastinate:

  • Make your temptations inconvenient. Put obstacles in the way of the things you typically do instead of what you really want to. For instance, if you compulsively check social media, delete such apps from your phone.
  • Make it as easy as possible to do what you rationally decide you want to. If you want to go to the gym before work but you’re not a morning person, sleep in your exercise clothes.

Education consultant Christopher Rim also recommends mindfulness: “If procrastination is spurred on by a knee-jerk reaction to a negative emotion, the first intervention has to be noticing … those negative emotions.”(6) And he suggests cultivating habits that promote getting things done:

  • Practice mindfulness on a regular basis, perhaps as a daily meditation, so you can more easily notice what goes on when you are tempted to procrastinate.
  • Learn to enjoy the feeling of accomplishment more than just being busy. Aim for getting things done rather than just working a lot.
  • “Touch it once.” When a text message or an email arrives or an idea comes up, deal with it immediately instead of putting it off. Do it right away, say “no,” delegate it, schedule it or ask for input; but don’t just put it aside.
  • Avoid perfectionism. Getting something done sooner is better than waiting until later to get it perfect. It is easier to deal with a first draft than an empty piece of paper or a blank word processing document.

All of these tricks and techniques—and there are more; this is not a complete list—are ways of exercising our capacity for second-order thinking.

Humans have far greater intelligence than other animals. We make plans, imagine states of affairs not immediately present and target our behavior to reach envisaged goals. When this intelligence is directed at affairs in the world, it is first-order thinking. It can range from the very simple, such as jotting down a grocery list, to the very complex, such as planning a multi-year project. Not only do we make plans, we execute them and accomplish our goals. We make corrections along the way to overcome obstacles and take into account changing circumstances. When this kind of observation, planning and execution is directed at ourselves, it is second-order thinking, also known as self-knowledge, self-awareness, self-reflection (as one examines one’s reflected image in a mirror), and metacognition.

We can turn our attention to ourselves in two ways: We can observe ourselves in action, in the moment; and we can think about ourselves before or after we do something. The first is the mindfulness recommended by Lieberman and Rim. The second is the habits and strategies they and others prescribe.

Second-order thinking is the peculiarly human virtue. By “virtue” I do not mean some kind of high moral standard, but what the Greeks called areté, or excellence at being effective in the world. For example, an excellent teacher imparts knowledge accurately and thoroughly, and an excellent student learns quickly and retains what he or she has learned.

But what do you do to be an excellent human as such, not just as occupying a particular social role? You use second-order thinking to improve your ability to master life. Second-order thinking enables us to hone and improve our first-order thinking and thereby accomplish our goals more effectively. Even better, it enables us to examine our goals themselves to see if they are really worth pursuing, but that’s a topic for another time. For now, take this advice about dealing with procrastination and see how you can apply it to other areas of your life. Your reward, I expect, will be a greater depth of satisfaction and fulfillment.


(1) Wikipedia, “Akrasia.”

(2) Plato, Protagoras, 358b-d.

(3) Kraut, “Aristotle’s Ethics.”

(4) Lieberman, “Why You Procrastinate.”

(5) Haidt, The Happiness Hypothesis, p. 4.

(6) Rim, “How To Defeat Procrastination.”


Haidt, Jonathan. The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom. New York: Basic Books, 2006.

Kraut, Richard. “Aristotle’s Ethics.” The Stanford Encyclopedia of Philosophy (Summer 2018 Edition), Edward N. Zalta (ed.). Online publication as of 30 March 2019.

Lieberman, Charlotte. “Why You Procrastinate (It Has Nothing to Do With Self-Control).” New York Times, 24 March 2019, page B8. Online publication as of 29 March 2019.

Plato. Protagoras. Collected Dialogues, pp. 308-352. Ed. Hamilton, Edith and Huntington Cairns. New York: Pantheon Books, Bollingen Foundation, 1963.

Rim, Christopher. “How To Defeat Procrastination With The Psychology Of Emotional Intelligence.” Online publication as of 29 March 2019.

Wikipedia. “Akrasia.” Online publication as of 30 March 2019.

Feb 28 19

Fearing Death

by Bill Meacham

Is there any reason to fear death? I don’t mean the process of dying. There are plenty of ways to die that would be extremely unpleasant, and it is reasonable to try to avoid them. I mean the state of being dead after the death of your body. Certainly, many people do fear being dead, but the philosophical question is whether it is rational to do so.

Rational arguments depend on premises, and there are several different assumptions that we can make in thinking about death. The first is whether we continue in some form or other after bodily death or not. If we assume that we don’t, one set of arguments ensues. If we assume that we do, the next questions are theological. Is there a God who will reward or punish us for our deeds in this life? If so we better figure out how to get rewarded. If not, we better figure out what else we need to do to end up in a happy state. Philosophers over the years have given different answers to these questions.

What if we don’t believe that we live on after the body dies? The ancients had an answer for that. A few decades after Plato, Epicurus said we have no reason to fear death because we won’t be there to experience it! “Death does not concern us,” he said, “because as long as we exist, death is not here. And when it does come, we no longer exist.”(1)

Epicurus was a materialist; in his view reality is, fundamentally, material stuff. But what of the soul, or what we nowadays call the mind? Epicurus said it is identical to an organ in the body. He knew far less about physiology and neurology than we do now and thought the organ of thought resided in the chest. Now we say it is the brain. Such details aside, the point is that when the body dies, that organ dies and the mind or soul goes with it. There is nothing to fear about being dead because there will be nobody to experience that condition. Being dead is the complete absence of experiential mental states; it is an experiential blank. It won’t hurt; it won’t be pleasant; it won’t be anything. Hence, there is no reason to fear it.(2)

An extension of Epicurus’ argument proposed by his follower Lucretius says that the state of being dead is just like the state before being born; there is no reason to fear either one. Lucretius says,

Look back now and consider how the bygone ages of eternity that elapsed before our birth were nothing to us. Here, then, is a mirror in which nature shows us the time to come after our death. Do you see anything fearful in it?(3)

Heidegger agrees with Epicurus but has a different take on it. Being dead is the one aspect of human existence that cannot be described from a phenomenological, first-person point of view. You can’t even imagine it. But the human being (Dasein in his terminology), knowing that death is inevitable, can take an authentic stand toward his or her own life. The possibility of our own death is omnipresent, always there if we choose to pay attention to it. To live authentically is to live in the knowledge of our own finitude, a knowledge that allows each of us to make of our lives something of our own, not just something dictated by others—culture, family, school, religion and so on—which Heidegger calls the “they” (das Man).(4)

There is some question as to whether Heidegger, seemingly describing the structure of human existence generally, actually describes only his own idiosyncratic view of the world. He speaks of authentic being-towards-death as “anxious.”(5) Is being anxious a correct attitude toward life or just a morbid one? In order to make sense of Heidegger, we each need to examine our own experience and see if we find what he describes. I think that Heidegger’s anxiety is more a feature of the man himself than of Dasein in general. A sense of authentic being-towards-death is better captured by poet Mary Oliver:

Doesn’t everything die at last, and too soon?
Tell me, what is it you plan to do
with your one wild and precious life?(6)

Instead of feeling anxious about being dead, we can feel the excitement of making something of our life. In either case, if we are convinced that we will experience nothing after death because we won’t be around to experience anything, then Epicurus’ advice is cogent. There is no need to fear or worry about being dead.

That’s the rational position. But not many of us are entirely rational when it comes to contemplating our own death. Perhaps it’s because our animal bodies cling to life regardless of what we think, or perhaps it’s because we aren’t as clear-headed as Mary Oliver, but contemplating our own death does give most of us pause. The prospect of our death might fill us with regret at having to leave behind things or people, or perhaps the whole world, for which we have some fondness. We might fear having left unfinished something we wanted to accomplish or having left unreconciled a relationship that has become strained. Or we might just feel Heidegger’s vaguely unfocused anxiety. A poignant case in point is philosopher Herbert Fingarette, who lived a full and meaningful life and wrote a book on death in which he came to the same conclusion as Epicurus. But, as the short documentary “Being 97” reveals, at the end of his life he did indeed fear death and was puzzled and saddened by his failure to find the point of existence.(7)

The alternative to thinking that death is mere non-existence is to think that something, a soul or mentality or a point of view of some kind, does continue after the body dies. If you have such a belief, you expect to find yourself in a world after you die. That world will be different no doubt from the one you are in now, but you expect to have something before you, something to engage with. In short, life continues after death. In fact, death is not death, but only a transition into what we might call an afterlife. Then the question becomes not whether to fear death, but whether to fear the afterlife. Depending on your beliefs about what you think will happen and your assessment of how your life has gone, the prospect can be hopeful or terrifying.

Socrates said that if you have prepared yourself, you should welcome your transition to a better state. As portrayed by Plato in the Phaedo, Socrates says that the true philosopher should have no fear of death at all, as his whole life has been a preparation for that very event. According to accepted belief of the time, when you die, your soul separates from and leaves behind your body. The body dies but the soul lives on; and the philosopher’s soul, unencumbered by bodily distractions, can then enjoy the pleasure of pure knowledge of the Just, the Good, the Beautiful and so on.(8)

The Gnostics of the first couple of centuries of the Christian era had a similar view. They thought that this material world we live in was basically a sort of prison created, not by the supreme Godhead, but by a demented or at least incompetent lower god. We find ourselves thrown into a world à la Heidegger, but the world thwarts our desire to make sense of life and to actualize ourselves authentically because it is the result of the malignant designs of an inferior deity. While nature is, for modern Existentialism, merely indifferent, for the Gnostics it was actively hostile toward the human endeavor. Fortunately, there was a way out, at least for certain advanced souls. Such a soul could receive a supra-cosmic revelation in the form of a vision that would reveal the knowledge (gnôsis) that humankind is alien to this realm and possesses a “home on high” within the plêrôma, the Fullness, where all the rational desires of the human mind come to full and perfect fruition. Much like the ascetic philosopher idealized by Plato, the Gnostic strove to dissociate himself or herself from the material world. If successful, you could achieve some degree of release from suffering in this world, and even more so in the next. Death for the Gnostic, as for Socrates, was to be welcomed, at least if you were suitably prepared.(9)

And, of course, there is no shortage of alarming accounts of what will happen to you if you are not suitably prepared. A well-known example is the sermon by Christian preacher Jonathan Edwards in 1741, “Sinners in the Hands of an Angry God,” in which he warns those who fail to accept the grace of the Christ that they are in grave peril:

The Wrath of God burns against them, their Damnation do[es]n’t slumber, the Pit is prepared, the Fire is made ready, the Furnace is now hot, ready to receive them, the Flames do now rage and glow. The glittering Sword is whet, and held over them, and the Pit hath opened her Mouth under them. The Devil stands ready to fall upon them and seize them as his own.(10)

Terrifying indeed, and one reason why those who don’t like attempts to motivate by fear shun apocalyptic religions.

I could go on and on with examples, as the belief in life after death is widespread throughout human history. From primitive ancestor worship to present-day theistic religions, some themes are common:

  • There is something amiss about our life in this material world.
  • It can be better or worse in the afterlife.
  • Your state in the afterlife depends on how you comport yourself in this life.

I suspect that a great deal of people’s fear of death has to do with fear of going to hell or being punished in some way in the next life. Religious traditions tell us how to behave here in order to be in a good place there. The way to avoid fear of death, they say, is to do what the scriptures, teachings and elders say to do in order to end up in a happy state in the afterlife. Fear is appropriate if you believe that you have not fully lived up to what is required of you. Confidence is appropriate if you have been righteous and obedient. My mother, the wife of a Presbyterian minister, told me serenely shortly before her death, “I’ll be taken care of.”

The specifics of what is mandated by religion vary from culture to culture, but some of those teachings might indeed be divinely inspired. If you suspect that something happens to you after death but don’t want to blindly accept what you have been told without careful consideration, you can compare the teachings of various traditions and find those that are common to many or seem to be good advice for life in general. I’m thinking of things like treating others as you would have them treat you, helping the poor and needy, avoiding obsession with material things and the like. One of my favorites is from the prophet Zoroaster, who taught that Ahura Mazda, the supreme Wise Lord, desires our welfare. To that end, the Wise Lord commands us to have good thoughts, good words and good deeds.(11)

Most religions are dualistic, viewing the world as divided into opposites such as good and evil, body and soul, material life and spiritual life and the like. Within them, though, we find strains of mystical monism, the belief that despite the appearance of variety, in fact all is one. For the mystic, the transition to the afterlife is neither a calamitous loss of this life nor a triumphant gain of the next. Instead, it is but a step in the soul’s journey toward the One from whence it came.

This idea comes from the Sufi mystic Hazrat Inayat Khan. Before elaborating, let me acknowledge that I cannot speak from experience here, as I don’t have any personal memories of having died, nor of being in an afterlife. But I am convinced that life can continue after the death of the physical body. My daughter communicated with me shortly after she died in a car accident, and there was enough independent confirmation from various people to lead me to believe that it was not just a hallucination or wishful thinking. Please see my essay “An Impeccable Death” for the details.(12) Once I visited what is now a museum in Istanbul but was in former times a tekke, a gathering place for Sufi ceremonies of music and movement. I had a powerful sense of familiarity, as if I had been there before. There is no objective proof, of course, but I feel no hesitation in taking seriously what Inayat Khan says about the journey of the soul.

Now, about the soul: there has been much controversy about what the term “soul” means, whether the soul (whatever it is) exists, whether we have one, whether we are one and so forth. I don’t intend to resolve such questions; I just stipulate that what I mean by the term is the unobservable center around which the experience of each of us is organized and from which our actions emanate.(13) When we transition to the afterlife, the soul is what experiences whatever is there and acts in response.

And what will we find there? Basically, what we bring with us. We don’t bring anything material, of course. Nothing that has mass accompanies us to the afterlife. What does accompany us is intangible: our beliefs; our character; our habitual way of approaching the world and our emotional attitude toward it; the way we treat other people; in short, our personality.

Inayat Khan says that the world that appears to us then is influenced by what we believe now. A Christian finds a Christian world; a Hindu, a Hindu world; a Muslim, a Muslim one.(14) Those from other traditions or who espouse none will find different worlds, each a continuation of what they expect or hope or fear in this life. In short, what is left behind is material stuff, and what comes with us is a function of what we carry in our mind. Inayat Khan says,

Before the soul now is a world, a world not strange to it, but which it had made during its life on the earth. That which the soul had known as mind, that very mind is now to the soul a world; that which the soul while on earth called imagination is now before it a reality.(15)

According to Inayat Khan, the afterlife can be a heaven or a hell. What we can do now to influence the outcome is to cultivate the kind of world we would like to be in and to train ourselves to be the kind of person we would like to be while in that world. He continues,

What will be the atmosphere of that world? It will be the echo of the same atmosphere which one has created in this. If one has learned while on earth to create joy and happiness for oneself and for others, in the other world that joy and happiness surrounds one. And if one has sown the seeds of poison while on earth the fruits of these one must reap there.(16)

Your personality goes with you, so cultivate a beautiful and harmonious personality in this life, says the Sufi sage. (This is not a moral commandment, by the way, just very good advice.) Indeed, he devotes much of his writing to what he calls the Art of Personality, the point of which is to become a person who brings heavenly blessings wherever he or she goes. Heaven and hell are not reserved for the afterlife.

It is not that God from His infinite state rewards us or punishes us, or that there is one fold or enclosure called heaven, in which the virtuous are allowed to be, and another called hell, in which all the sinners are penned. In reality we experience heaven and hell in our everyday life all the time.(17)

And this brings us back to the original question, whether it is rational to fear the state of being dead. For those who believe that this one life is all we get and for those who believe that we live on after death, the advice is the same: cultivate tranquility and benevolence here and now. Become a person who radiates and embodies love, harmony and beauty.



(2) O’Keefe, “Epicurus.”

(3) Lucretius, Book III, vv. 972-75.

(4) Wheeler, “Martin Heidegger.”

(5) Heidegger, Being and Time, p. 311.

(6) Oliver, “The Summer Day.”

(7) Hasse, “Being 97.”

(8) Plato, Phaedo, 64a-67e. I say “his” because in Plato’s time philosophers were mostly male.

(9) Moore, “Gnosticism.”

(10) Edwards, “Sinners in the Hands of an Angry God.”

(11) Rose, Zoroastrianism, An Introduction, p. 18. See also Meacham, “Learning from Masters.”

(12) Meacham, “An Impeccable Death.”

(13) Meacham, How To Be An Excellent Human, p. 60.

(14) Khan, “Aqibat, Life After Death,” pp. 54-55.

(15) Khan, “The Soul, Whence and Whither,” p. 165.

(16) Ibid., p.168.

(17) Khan, “Aqibat, Life After Death,” p. 57.


Edwards, Jonathan. “Sinners in the Hands of an Angry God.” Boston: Kneeland and Green, 1741. Libraries at the University of Nebraska-Lincoln. Electronic Texts in American Studies. Online publication as of 25 February 2019.

Hasse, Andrew. “Being 97.” Online video publication as of 20 February 2019.

Khan, Inayat. “Aqibat, Life After Death.” The Sufi Message of Hazrat Inayat Khan Volume 5, pp. 37-78. London: Barrie and Jenkins, 1973. Available online as of 25 February 2019.

Khan, Inayat. “The Soul, Whence and Whither?” The Sufi Message of Hazrat Inayat Khan Volume 1, pp. 107-186. London: Barrie and Jenkins, 1973. Available online as of 25 February 2019.

Heidegger, Martin. Being and Time. Tr. Macquarrie, John, and Robinson, Edward. New York: Harper and Row, 1962.

Lucretius. On The Nature Of Things. Tr. Martin Ferguson Smith. Cambridge: Hackett, 2001.

Meacham, Bill. “An Impeccable Death.” Online publication as of 25 February 2019.

Meacham, Bill. How To Be An Excellent Human: Mysticism, Evolutionary Psychology and the Good Life. Austin: Earth Harmony, 2013. Available online.

Meacham, Bill. “Learning from Masters: Ethics and Cosmology in Zarathustra and Hazrat Inayat Khan.” Online publication as of 25 February 2019.

Moore, Edward. “Gnosticism.” The Internet Encyclopedia of Philosophy. Online publication as of 20 February 2019.

O’Keefe, Tim. “Epicurus.” The Internet Encyclopedia of Philosophy, ISSN 2161-0002. Online publication as of 16 February 2019.

Oliver, Mary. “The Summer Day.” New And Selected Poems. Boston: Beacon Press, 1992, p. 94.

Plato. Phaedo, tr. Hugh Tredennick. In The Collected Dialogues of Plato, ed. Edith Hamilton and Huntington Cairns. New York: Pantheon Books, 1961.

Rose, Jenny. Zoroastrianism, An Introduction. London and New York: I.B. Tauris, 2011.

Wheeler, Michael. “Martin Heidegger.” The Stanford Encyclopedia of Philosophy (Winter 2018 Edition), Edward N. Zalta (ed.). Online publication as of 17 February 2019.

Jan 30 19

How To Be An Excellent Human available online

by Bill Meacham

The full text of my book How To Be An Excellent Human is now available online. I have probably made about as much money as I am ever going to by selling the physical book, so now I am making it freely available. The point was never to make a lot of money but to get my ideas out to the world. Feel free to download it and share it with friends.

Here is where to find it:

Here is a summary of the book:

How can we live good, fulfilling lives? How can we be happy? These questions have been at the forefront of philosophy ever since Socrates, and this engaging book attempts an answer. It addresses the big questions of life:

  • How should we live our lives?
  • How should we decide how to live our lives? How should we even frame the question in the first place?
  • What is it to be human? What are we like, how do we function?
  • What is our place in the universe? How do we fit into the bigger picture? What is the bigger picture, the basic nature of all of reality?

The book is exciting and wide-ranging. It is philosophy, but don’t let that scare you off; it is philosophy made accessible to the general reader. The author is equally at home lucidly explaining how mystics make sense when they say that all is one and how evolution has provided us with powerful but fallible mental capacities. The book offers an exciting journey with stops along the way to consider consciousness, panpsychism, brain science, quantum physics, how we are like and unlike chimpanzees and bonobos, where morality comes from, how our emotions both guide us and trip us up, how our thinking works, how it sometimes fails and what we can do to fix it. Throughout, it recommends an approach to life that maximizes well-being, leading to the possibility of happiness and abundance for all.

The book covers a lot of ground, but it is quite approachable. You can read it straight through as an intellectually exciting story. Or you can dive in anywhere, dipping into chapters that pique your interest. In either case you will have fun reading it, and you will be rewarded with insights and ideas that will stimulate and delight your thinking.

Jan 11 19

On Consciousness (grumpy)

by Bill Meacham

I suppose my insistence on clarity of language about consciousness makes me a bit of a curmudgeon—or perhaps a bellyacher, crab, crank, grump or whiner—but I am appalled at some of the things people say about the topic. Here is an example:

Psychology professors Peter Halligan and David Oakley assert that being conscious is merely a byproduct of brain processes, a respectable position in philosophy of mind called Epiphenomenalism.(1) But when they try to say what they are talking about, all they do is repeat synonyms:

We all know what it is to be conscious. It is, basically, being aware of and responding to the world.

… while undeniably real, the “experience of consciousness” or subjective awareness is precisely that – awareness. No more, no less.

… subjective awareness [is] the intimate signature experience of what it is like to be conscious….(2)

So being conscious is being aware, being aware is having experience, and having experience is being conscious. These definitions are ridiculous. They are completely circular and shed no light on the subject. The problem is that “conscious” and “aware” are largely synonymous, which becomes apparent when you try to translate them into German or Spanish or Portuguese or any other language that has only one word where English has two. As Wittgenstein said, we are bewitched by our language.(3)

What should the authors have said instead? I have written a whole paper on the subject of how to speak about being conscious, which I’m told is fairly clear. Rather than summarize it, I urge you to read the paper itself.(4) In what follows I condense the authors’ argument and rephrase it in what I think is better terminology.

We all know what it is to be conscious. The world appears to us vividly, and we respond to it. The world includes public things such as trees and people and private things such as our thoughts and feelings. Some thoughts and feelings are conscious, meaning that they appear to us vividly and we can notice and focus on them. Others are less vivid; figuratively, they are in a sort of periphery. Some are so dim as to be not noticeable at all, and we call them unconscious. Here is a picture:

Many people think that we can control our conscious thoughts and feelings, and that they in turn can cause us to act in certain ways. But modern neuroscience tells us that that is not so.

The rest of the argument is clear enough in the authors’ own words:

There is now increasing agreement that most, if not all, of the contents of our psychological processes – our thoughts, beliefs, sensations, perceptions, emotions, intentions, actions and memories – are actually formed backstage by fast and efficient nonconscious brain systems. … Continuing to characterise psychological states in terms of being conscious and non-conscious is unhelpful.(5)

The authors conclude that conscious psychological processes and unconscious psychological processes are functionally the same; they are both caused by physical events in the brain. Whether they are conscious or not makes no difference in their causes or what they do. The only difference is that some are presented to us vividly enough that we notice and pay attention to them, and some aren’t.

That’s the argument. Whether it holds up or not is for another time. My only point in this essay is that it is quite possible to state the case in terms that are not circular and not ambiguous. Go forth and do likewise.


(1) Robinson, “Epiphenomenalism.”

(2) Halligan and Oakley, “What if consciousness is just a product of our non-conscious brain?”

(3) Wittgenstein, Philosophical Investigations, §109.

(4) Meacham, “How to Talk About Subjectivity (Don’t Say ‘Consciousness’)”.

(5) Halligan and Oakley.


Halligan, Peter, and David A. Oakley. “What if consciousness is just a product of our non-conscious brain?” Online publication as of 9 January 2019.

Meacham, Bill. “How to Talk About Subjectivity (Don’t Say ‘Consciousness’)”. Online publication as of 9 January 2019.

Robinson, William. “Epiphenomenalism.” The Stanford Encyclopedia of Philosophy (Fall 2015 Edition), Edward N. Zalta (ed.), Online publication as of 9 January 2019.

Wittgenstein, Ludwig. Philosophical Investigations, 3rd Edition. Tr. G.E.M. Anscombe. Oxford: Basil Blackwell, 1968 (1986). Online publication as of 25 October 2018.

Nov 21 18

The Game

by Bill Meacham

I recently learned of a game called The Game, the rules of which pose interesting philosophical questions. Here are the rules:(1)

  1. Everybody in the world who knows about The Game is playing The Game. A person cannot decline to play The Game; it does not require consent to play and you can never stop playing.
  2. Whenever you think about The Game, you lose.
  3. Losses must be announced. This can be verbally, with a phrase such as “I just lost The Game”, or in some other way, for example on social media or by holding up a sign.

OK, now you know about The Game and you are thinking about it. You lose. Sorry about that.

How do you feel about the assertion that you lose? Some typical reactions are curiosity, amusement, befuddlement, indifference and annoyance. I find The Game intellectually engaging and choose to address some of its philosophical and psychological issues. As you read this essay you will continually lose The Game, but don’t worry. There’s no penalty for doing so.

Is It a Game?

First, is The Game really a game? Some have called it a “mind virus.”(2) It is certainly a meme in Richard Dawkins’ original sense of a unit of cultural transmission.(3) But what does it have in common with other games, such as chess or football or ring-around-the-rosie? Here are some definitions of the term “game”:

A game is a structured form of play …. Key components of games are goals, rules, challenge, and interaction.(4)

The Game has rules and interaction, but what is its goal? In many games the goal is to win, but it seems impossible to win The Game by deliberately trying to win.

A game is commonly defined as one or more players trying to achieve an objective ….(5)

Again, what is the objective? It can’t be to win, as there is no way to do so. We have to look beyond the rules to the context in which The Game is played. Some people take the objective to be to infect as many people as possible with the mind virus. For others the objective seems to be simply to have fun with friends or potential friends and affirm a sense of community with them. At a comic book convention, for instance, or a science fiction convention or the like, someone may exclaim “Oh rats, I just lost The Game,” thereby provoking others to groan and admit that they lost it as well.

Ludwig Wittgenstein pondered the nature of games and asserted that there is no essence of game, nothing that uniquely identifies games. Instead, games bear a “family resemblance,” as he called it, to each other. They have a series of overlapping similarities, but no one feature is common to them all. Each game resembles at least one other, but no feature is common to all games and only games.(6) Given this approach, it is safe to say that The Game is indeed a game.


A crucial feature of The Game is that it is self-referential. Playing it requires some degree of second-order thinking, also called self-awareness or metacognition. You have to notice that you are thinking of The Game in order to announce that you have lost. Not only that, you do so ironically. You announce your loss as if dismayed, but you are not really dismayed. You actually kind of enjoy announcing it. Not only do you know that you have thought of The Game and thus lost, you also know that you don’t really mind losing, but you pretend you do. This capacity for self-awareness is the uniquely human virtue, what human beings do that other beings don’t or don’t do nearly so well.(7) Socrates said that you must know yourself in order to have a life worth living. The Game is one way humans have fun being human.

Ironic Process

The Game is a variant of what is called “ironic process,” whereby deliberate attempts to suppress certain thoughts make them more likely to surface.(8) The process is ironic because it produces an effect contrary to what is desired. You can try to win by not thinking about The Game, but that’s difficult. Fyodor Dostoevsky wrote “Try and set yourself the problem of not thinking about a polar bear and you will see that the damned animal will be constantly in your thoughts.”(9) Researchers have found that when we try not to think of something, one part of our mind does avoid the forbidden thought, but another part “checks in” every so often to make sure the thought is not coming up, thereby, ironically, bringing it to mind.(10)

Now, in fact there is a way to avoid thinking about a polar bear, and that is to think very hard of something else instead. Imagine a black bear or an elephant or some other animal. Imagine this animal dancing around. With sufficient focus, you can avoid thinking of the polar bear. No doubt it is a bit difficult and not something most of us do much, but we are not helpless before our thoughts. I once did it by repeating to myself over and over “There is something of which I must not think. There is something of which I must not think.” After a while I stopped and could not remember what it was! It came to me later, and now I have forgotten it altogether, but for a time I was successful.

The ability to focus your thoughts, to exert some control over them, is of profound importance. A Sufi mystic says,

He who does not direct his own mind lacks mastery. … If he does not control his mind, he is not a master but a slave. … Mastery lies not merely in stilling the mind, but in directing it towards whatever point we desire, in allowing it to be active as far as we wish, in using it to fulfill our purpose, in causing it to be still when we want to still it. He who has come to this has created his heaven within himself; he has no need to wait for a heaven in the hereafter, for he has produced it within his own mind now.(11)

Are You Playing?

Now here is a conundrum: If you know about The Game, and you think of it but don’t announce that you have lost, are you playing The Game? Arguments can be made for both alternatives, that you are and that you aren’t.

Abstractly, if you think of The Game as a set of rules, the first of which is that you can’t refuse to play once you know about The Game, then you are indeed playing The Game when you know that you have thought of it, whether or not you announce your loss. There are different concrete scenarios in which this situation can play out.

Firstly, you might just forget. You might think of The Game—that is, it might idly occur to you, or you might hear someone mention it or you might think about it abstractly as I am doing in this essay—but forget that you thereby lose. In that case you are playing The Game, but not correctly.

Secondly, you might cheat. You cheat if you think of The Game and remember that you are supposed to announce your loss but don’t. You lie by omission. You might try to lie overtly and say that you have won The Game, but then everyone would know that you are grossly mistaken about the rules. To lie and not get caught, you have to remain silent. By remaining silent, you signal to others that you don’t know about The Game. (And if they aren’t thinking about The Game, they don’t even recognize your signal.) On this interpretation of when The Game is being played, remaining silent when you are supposed to announce your loss is playing The Game, but cheating at it. Do we say of someone who cheats that they aren’t playing the game? No, we say that they are playing, but not correctly. You participate in The Game by cheating. If you weren’t participating at all, you would have no thought of not participating and would not be cheating. By choosing to cheat, you participate in The Game.

On the other hand, you might refuse to announce your loss because you have decided not to play The Game. Perhaps you find it silly, or it once seemed like too much bother so you didn’t speak and now silence has become a habit, or you are just ornery and don’t want to play. One presenter at a convention got so angry at people interrupting the proceedings with their announcements that they had lost The Game that he made something of a crusade of opposing it.(12) Are you playing The Game if you deliberately decide not to? A case can be made that in that case you are not playing.

The first rule of The Game is that you can’t avoid playing, so even if you decide you don’t want to, you can’t help it. But who is to say that you have to obey the first rule? What if we say that to play The Game you have to obey all the rules? In that case by not obeying the first rule, you avoid playing The Game! How could we justify the rule that you must obey all the rules? Is that one a rule of The Game? You can’t justify it by appealing to a further rule, as doing so would get you into an infinite regress: you could only justify the further rule by a yet further one, and so on ad infinitum.(13)

Wittgenstein would say that the only way to justify having to play by the rules is by appeal to the practices of the players, their customs and their uses of the game.(14)

The Game is a social construct, played with others. By not interacting in the prescribed manner, you don’t participate in it. By your silence you avoid playing. If others announce that they have lost The Game and you don’t, and they have reason to believe that you know about The Game, then they know that you are refusing to play. They know (or are convinced or strongly suspect) that the idea of The Game has come to your mind, and they can see that you have not announced your loss, so they are justified in believing that you are deliberately not playing. (But of course in that case, they might just decide that they don’t want to play with you either and go off without you. Maybe you should play in order to avoid missing out on further fun.)

It seems clear on this view that you are not playing The Game. But the other players might say that you are too playing The Game, and you are just deluded into thinking you are not. So maybe it is not so clear after all.

Now, which argument is stronger, the one that says you are playing The Game when you don’t announce your loss or the one that says you aren’t? There seems to be no clear answer. The argument is about the meaning of concepts and how to apply them, a favorite topic among philosophers. Let’s apply the pragmatic method of William James, who in common with Wittgenstein aimed at cutting through conceptual confusion. James says, “The pragmatic method … is to try to interpret each notion by tracing its respective practical consequences. What difference would it practically make to any one if this notion rather than that notion were true?”(15) The practical consequence of saying that you are playing the game is to affirm the solidarity of the community of players. The practical consequence of saying that you are not is to affirm the freedom of the individual. The answer depends on your point of view and your desired outcome. Beyond that, dispute is idle. But idle dispute is not useless. The advantage of such an undecidable question is that it enables those who enjoy discussion to keep talking. They get to keep playing the philosophy game.


Well, who would have thought there was so much to say? Is there a point to all this? No, it’s just a game.



(1) Wikipedia, “The Game (mind game).” Another formulation of the first rule is that everyone in the world is playing The Game, but I don’t see how you can play a game you never heard of.

(2) Haywood, “Lose The Game.”

(3) Dawkins, The Selfish Gene, p. 192.

(4) Wikipedia, “Game.”

(5) Haywood, “Lose The Game.”

(6) Wittgenstein, Philosophical Investigations, §65-71.

(7) Meacham, How To Be An Excellent Human, chapter 20.

(8) Wikipedia, “Ironic process theory.”

(9) Dostoevsky, Winter Notes on Summer Impressions, p. 62.

(10) Winerman, Lea, “Suppressing the ‘white bears’.”

(11) Khan, “Stilling The Mind,” pp. 126-127. The author wrote before there were efforts to remove gender discrimination from common usage. Out of respect for historical sources, I have left the language as it was originally given and offer sincere apologies to any who feel alienated or offended by the choice of words. Certainly the author intended to include everyone.

(12) Dorn, “Finding Five Dollars.”

(13) Carroll, Lewis, “What the Tortoise Said to Achilles.”

(14) Wittgenstein, Philosophical Investigations, §197-202.

(15) James, William, “What Pragmatism Means,” p. 142.



Top: as of 16 November 2018. San Diego Comic-Con 2008 day 1. The person pictured is Raven Myle Aurora. Photo by Jason Mouratides from Portland, Oregon, USA. CC BY 2.0.


Bottom: as of 16 November 2018. Text: “I’m as surprised as you! I didn’t think it was possible.”



Carroll, Lewis. “What the Tortoise Said to Achilles.” Mind 4, No. 14 (April 1895): 278-280. Online publication as of 12 November 2018.

Dawkins, Richard. The Selfish Gene. New York: Oxford University Press, 1976.

Dorn, Trae. “Finding Five Dollars (Why ‘The Game’ is Dumb).” Online publication as of 16 November 2018.

Dostoevsky, Fyodor. Winter Notes on Summer Impressions. Tr. Kyril FitzLyon. London: Quartet Books, 1985.

Haywood, Jonty, et al. “Lose The Game – The World’s Most Infamous Mind Virus.” Online publication as of 16 November 2018.

James, William. “What Pragmatism Means.” In Essays In Pragmatism. Ed. Alburey Castell. New York: Hafner Publishing Co., 1948 (1961). Online publication as of 16 November 2018.

Khan, Inayat. “Stilling The Mind.” In The Sufi Message of Hazrat Inayat Khan, Volume VII, In An Eastern Rose Garden. London: Barrie and Jenkins, 1973. Online publication as of 16 November 2018.

Know Your Meme. “The Game.” Online publication as of 6 November 2018.

Meacham, Bill. How To Be An Excellent Human: Mysticism, Evolutionary Psychology and the Good Life. Austin, Texas: Earth Harmony, 2013.

Wikipedia. “Game.” Online publication as of 16 November 2018.

Wikipedia. “Ironic process theory.” Online publication as of 16 November 2018.

Wikipedia. “The Game (mind game).” Online publication as of 15 November 2018.

Winerman, Lea. “Suppressing the ‘white bears’.” American Psychological Association Monitor on Psychology. October 2011, Vol 42, No. 9, page 44. Online publication as of 17 November 2018.

Wittgenstein, Ludwig. Philosophical Investigations, 3rd Edition. Tr. G.E.M. Anscombe. Oxford: Basil Blackwell, 1968 (1986). Online publication as of 25 October 2018.

Nov 5 18

Reassessing Morality Part 2

by Bill Meacham

(This is the second of a two-part series. In the first part I argued that morality is best conceived as a socially constructed reality.)

Part II: The Practice of Morality

When we recognize the socially constructed status of moral rules, responsibilities, obligations, prohibitions and the like we may find ourselves in a bit of a quandary: what to do with our new understanding. We understand that these things do not, in fact, apply universally. Now we have a choice: shall we take them to apply to us? We could, it seems, just ignore them, or ignore the ones we don’t like. But on what basis would it be rational to ignore them, and which ones?

Morality of some sort is necessary for human existence, for we cannot live without others of our kind. Zoologists classify the human species as “obligatorily gregarious.”[1] We must have ongoing and extensive contact with our fellows in order to survive and thrive, and morality governs those interactions. Suppose we wanted to devise a moral system for universal use. On what rational basis could we choose the rules of that system?

It is theoretically possible to opt out of socially constructed reality in a way that we cannot opt out of physical and mathematical/logical reality. If everybody by some magical trick stopped believing in physical reality, it would assert itself anyway. Even if we believed we could, we could not in fact walk through a tree. The same goes for mathematical/logical reality. The square root of nine would still be three even if nobody believed it. But if everyone stopped believing in money, we would have no money. We would have only bits of paper and metal. Similarly, it seems that we could opt out of morality, although doing so would be quite difficult.

It would be difficult because socially constructed reality is not merely fictional; it is, in its own way, real. Powerful evolutionary forces have instilled in us a sense of morality; we can’t just wish it away. Moral entities, and institutional facts in general, have a peculiar nature: they compel our behavior even though we, in a sense, just make them up. They compel our behavior because they seem really to be there. Approaching the issue not analytically but from the point of view of a member of society, sociologists Peter Berger and Thomas Luckmann observe that institutional facts are “experienced as possessing a reality of their own, a reality that confronts the individual as an external and coercive fact.” The social world appears to each of us “in a manner analogous to the reality of the natural world … as an objective world.”[2] The socially constructed entities may exist only because we believe they do, but we believe they exist because they seem really to be there. And, for most of us, they continue to seem really to be there even after we recognize their socially constructed nature, much as an optical illusion still fools us even when we know that it is only an illusion.

It is no small thing to be an institutional fact. To minimize the importance of morality by saying that it is “just” socially constructed is to overlook its emotional and motivational force on us. You can remove yourself from some institutions, e.g. marriage, but to do so you generally need to do it with other people. In other words, you create an alternative social institution. Some communes may try to do away with money, but most of them have to interact with the outside world, which forces them to deal with money anyway. And yet, recognizing the socially constructed nature of morality opens a possibility that was not apparent to us before.

Before we think about it much, we treat moral rules as constraining our conduct because we take them for granted. Their socially constructed character is invisible to us, largely because our acceptance of them is not something we do deliberately. We are taught the moral rules by parents, elders and educators in our society. Just as we take money, marriage, government, property and the myriad other institutional facts as real, so we take moral rules as objectively real. We question them only when cracks in the structure of our social reality confront us, as illustrated by moral conflicts such as those mentioned in Part I. And many of us don’t even question them then.

But for those who do, a sort of spell is broken. Intellectually, we do not see our world the same way as before; we are no longer taken in by moral reality. Once we understand that morality is socially constructed, we have the freedom to buy into it or not. We are able to choose, within the constraints of our emotional and social conditioning, which duties to obey. This freedom can seem like a burden because emotionally we still feel the force of these moral intuitions. We may know intellectually that it is not always wrong to steal things, but we still cringe a bit at the thought of doing so.

Philosophically, the question of whether to obey certain moral rules and not others or to include certain ones but not others in a deliberately constructed moral system cannot be answered in the context of the moral rules in question, because to do so would be already to assume the answer. We need some other way to resolve the issue. The resolution can come by recognizing a further fact about rules for behavior: they are not all socially constructed.

Moral rules are socially constructed, but other rules are not: prudential or practical rules variously called “maxims,” “policies,” “rules of thumb” and the like. We do not have to evaluate our actions in terms of moral rightness and wrongness; we can instead evaluate them in terms of the benefits or harms of their consequences. Moral rightness is socially constructed. The effects of our actions are not.

Morality and Prudence: Rightness and Goodness

Morality and prudence are two ways of thinking about ethics. (By “ethics” I mean the evaluation of conduct generally. Morality and prudence are subsets of ethics.) Prudence is the exercise of rationality to promote one’s own interests. To act prudently is to act wisely and rationally in order to achieve one’s goals. I want to use the term “prudence” in a slightly more extended sense, as one’s chosen goals might not always be in one’s actual interest.

To understand the difference between morality and prudence, we can put the matter in linguistic terms. They are manifested as two clusters of concepts and language used to command or recommend specific actions or habits of character. We can call them rightness and goodness. The rightness paradigm recognizes that people live in groups that require organization and regulations, and frames values in terms of duty and conformance to rules. The goodness paradigm recognizes that people have desires and aspirations; it frames values in terms of what enables a being to achieve its ends. The right has to do with laws and rules; the good, with achievement of goals. Rightness and goodness are two alternative ways of organizing the whole field of ethics to carry out the tasks of evaluating conduct, both in particular cases and in general types.[3] Judgments of rightness and wrongness, like judgments of goodness and badness, can apply to particular actions, to types of actions, and to the habits of conduct that make up a person’s character.

Morality exemplifies the rightness paradigm, which uses the terms “right” and “wrong” to evaluate conduct. Some synonyms for “right” are “proper,” “moral” and “permissible.” Some synonyms for “wrong” are “improper,” “immoral” and “impermissible.” Morality is not the only kind of rightness. Others are law, which consists of legal rules enforced by the threat of physical coercion, and etiquette, social rules enforced solely by praise and blame. It is obvious that law and etiquette are socially constructed. As we have seen, it is reasonable to believe that morality is too.

Prudence exemplifies the goodness paradigm. That paradigm uses the terms “good” and “bad” to evaluate not only conduct but also things, people, states of affairs, etc., as well as maxims or guidelines for conduct. Some synonyms for “good” are “helpful,” “nourishing,” “beneficial,” “useful” and “effective.” Some synonyms for “bad” are their opposites: “unhelpful,” “unhealthy,” “damaging,” “useless” and “ineffective.”

Something that benefits something or someone we call good for that thing or person. Such goodness may be instrumental or biological. Instrumentally, a hammer is good for pounding nails, and what is good for the hammer is what enables it to do so well. Biologically, air, water, and food are good for living beings.

To make sense, an instrumental usage requires reference to someone’s purpose or intention. Thus, a hammer is good for pounding nails, and you pound nails in order to build things such as furniture or housing. Your intention is to acquire the comfort and utility these things afford you. That is your goal, or end, and the good is what helps bring it about.

The biological usage does not require reference to purpose or intention. It is expressed in terms of health and well-being. That which nourishes a living thing is good for it. The good, in this sense, is that which enables a thing to function well, that is, to survive, thrive and reproduce. (The function of a living thing is, intrinsically, to survive and reproduce.[4] Living things also have functions external to themselves in their habitat or biosphere, such as to provide shelter or nutrients or other goods to other living things. Here I mean function in the intrinsic sense.)

The instrumental usage intersects the biological when we consider what is good for something that is itself good for a purpose or intention. For instance, keeping a hammer clean and sheltered from the elements is good for the hammer and enables the hammer to fulfill its instrumental function. In the instrumental sense as well, the good is that which enables a thing to function well.

If someone says something is good, you can always ask “Good for whom? Good for what and under what circumstances?” If someone says something is right, you can always ask “According to what rule?” The two domains of discourse really are separate, and it is not useful to mix them. Mixing them is a form of category error. That something has good effects does not make it right. That something is in accordance with a moral rule does not make it good.

(As a caveat, let me say that the advice to pay attention to language in this way is useful for the most part, but not universally. I am proposing a heuristic rule of thumb, a tactic for getting clarity, not an infallible recipe. Sometimes the term “good” is used in a moralistic way, and there are other meanings of the term “right,” as in the right answer to a question. We have to pay attention to what is being asserted, not just to the specific words. But by and large, the language used to assess conduct provides a good clue to the nature of the assessment.)

Rightness and goodness differ in social usage. Both moral rules and consideration of consequences are ways to say “should,” that is, ways to tell someone what he or she should do (or refrain from doing) or should have done, or to tell ourselves the same. Moral rules are called “deontic,” after a Greek word meaning duty. But the deontic is not the only type of “should.” Another type, expressed in terms of goodness, is prudential or practical. In deontic cases the “should” is a prescription or even a command. In the prudential/practical case it is a recommendation. The force of our prescription or recommendation depends on the category in which the “should” is presented.

In the case of a deontic moral “should” such as “Thou shalt not steal” (“should” being stated in its strongest form, “shall”), we feel justified in demanding that people obey the moral rule and blaming them if they don’t. The imperative has a sense of universality, that it applies to everyone.

(In the case of a legal “should” we may not only demand and blame, we may also punish the offender. In the case of a “should” of social etiquette, we may only blame, but generally not demand. Neither of these is universal; they apply only within a certain legal framework or in a certain segment of society.)

An example of a prudential/practical “should” is that for good health you should eat lots of vegetables. In this case we may not demand but may certainly advise adherence to such a “should.” And we may not blame or punish failure to comply but may say the choice is foolish. Unlike moral rules, prudential/practical advice is not always universal. In practice, it depends on context. Perhaps for a malnourished vegan eating lots of vegetables would not be good, and instead he or she should try some meat.

The importance of the distinction is this: Unlike moral rules, which are not subject to objective verification, the good is a feature of the natural world; it has to do with benefits, which are publicly observable. Prudential/practical judgments are objectively verifiable. We can do studies of the effects of diet on health, for instance, studies that provide factual evidence, so the recommendation to eat vegetables is not just someone’s opinion.

Recognizing the difference between goodness and rightness shows us a way out of the quandaries and discomfort that arise from recognizing that morality is socially constructed. And recognizing the difference also shows us a way out of intractable moral conflict. Instead of framing the issues in terms of rightness, we can frame them in terms of goodness. In other words, instead of commanding one to do the right thing, we can advise one to do what is good.

Two Questions

The advice to promote goodness raises two obvious questions: Goodness for whom? And why should we do what is good anyway? A full discussion is beyond the scope of this essay, but in general the answer to the first question is, goodness for as many people as possible, including the person acting, within the bounds of what is doable. The answer to the second question is that promoting goodness in this way benefits oneself.

The underlying principle, taken from the study of systems theory applied to ecosystems, is that an element of a system thrives when the system as a whole is healthy, and a system as a whole is healthy when its constituent elements thrive. Human beings are elements in a variety of systems, most notably systems of other people, or communities. If, in situations of conflict, we can find ways to benefit all concerned, then we ourselves will be benefited. If everyone is satisfied, then the solution will be likely to last, leading to further benefit for ourselves. Short-sighted egotistical selfishness is self-defeating. The advice to seek goodness for as many concerned as possible is a strategy based on enlightened self-interest.

By the way, the injunction to work for the greater good is not utilitarian. Utilitarianism is just another morality, defining what is right in a certain way, as the greatest good for the greatest number of people. We are not obliged to maximize the good in this way. Rather, doing so is just good advice for maximizing our own welfare.

I suppose one could ask why we should maximize our own welfare. Again, a full answer is beyond the scope of this essay, but in short we have no absolute obligation to do so. In fact, however, most people do want their own welfare. The imperative is hypothetical, not categorical: If you want to enhance your welfare, work for the good of all concerned. In the absence of a rationally compelling reason to obey any given moral rule, this principle is well suited to serve as ethical guidance.

Summary and Conclusion

We started this inquiry by noting that some conflicts, those based on differing moral intuitions, resist easy solution. People entrenched in their morality have no inclination to compromise with what they see as evil. Along the way we identified a quandary felt by thoughtful people who want to be rational: we do not recognize an obligation to act on moral intuitions when we perceive them as the social constructions that they are. But which moral intuitions shall we abide by, and which shall we discard? On what basis shall we make the decision? And we feel a further discomfort when we contemplate opting out of morality but find ourselves emotionally locked in.

The way out of these issues is to recognize that there is another whole set of criteria by which to judge actions, people, policies and so forth, a set variously called “prudential” or “practical” and referred to by the language of goodness, not rightness. We can decide to focus on goodness, on what works to promote welfare, instead of on what rigid rules insist.

To apply this advice to conflicts such as those listed in Part I, we can ask the combatants to think about what would be beneficial for both parties. This requires some tact and diplomacy, of course, but it is worth a try. If both parties receive some benefit, a lasting peace is more likely than if one party wins and the other loses.

To apply this advice to personal moral quandaries, when we are trying to figure out what to do we can ask what good can come out of each choice, not what the right choice is.

To apply this to an approach to our conduct in general, to our character as persons, we can focus on what is beneficial as a general rule. We might want to be honest, for instance, not because of a commandment to avoid bearing false witness, but because doing so promotes harmony and good relations with others, which in turn benefits us.

Morality is certainly useful for maintaining social cohesion. Universality has its appeal, but to get a cross-cultural or universal set of moral values we would have to design it. We could more readily do so on the basis of what is good for people than on sectarian moral codes.

This essay began by listing some of the ill effects of moral conflict. Focusing on benefits for all concerned instead of on rigid morality ameliorates them. Working for the common good promotes flexibility, understanding, trust and honest communication. The first step is to frame issues in terms of goodness, not rightness. The second step is to seek the good for all concerned, not because it is our duty, but because doing so will benefit each of us in the long run.


[1] de Waal, Primates and Philosophers, p. 4.

[2] Berger and Luckmann, The Social Construction of Reality, pp. 76, 77.

[3] Edel, “Right and Good.”

[4] Foot, Natural Goodness, pp. 31-32.


Berger, Peter L. and Thomas Luckmann. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. New York: Penguin Putnam Inc., 1966.

de Waal, Frans. Primates and Philosophers: How Morality Evolved. Princeton: Princeton University Press, 2006.

Edel, Abraham. “Right and Good.” Dictionary of the History of Ideas. Ed. Philip P. Wiener. 1974 edition, Vol. IV, pp. 173-187. Online publication as of 15 August 2017.

Foot, Philippa. Natural Goodness. Oxford: Oxford University Press, 2001.

Oct 12 18

Reassessing Morality

by Bill Meacham

(This is the first of a two-part series. The second part will come shortly, so stay tuned.)

Part I: The Ontology of Morality

One of the most intractable sources of conflict in human affairs is clashes of morality. No doubt there are plenty of other sources of conflict, such as resource scarcity, tribal animosity, sexual jealousy, emotional restimulation and more. But a great deal of conflict is based on differing moral intuitions. Here are a few examples:

  • A Taliban tribesman kills his daughter for taking an unsupervised walk with a young man. He thinks he was obliged to do so. We in the West consider this an appalling murder.
  • Some people want to ban all abortions, claiming that abortion is morally wrong because it is murder. Others claim that not only is abortion not murder but a woman’s right to determine the fate of her own body outweighs any other moral claim.
  • Political protesters think it is their moral duty to disobey laws that they find unjust. Their opponents think patriotism, loyalty to one’s nation and obedience to its laws, is supremely obligatory.
  • Animal rights advocates praise “no-kill” animal shelters that minimize euthanasia of unclaimed pets even as costs mount drastically. They think we have a moral obligation to avoid harm to animals. Others lament the diversion of resources that could—and, they say, should—be used to provide health and public safety services to human beings.

All these examples of moral conflict (and there are many more) show certain common features. Researcher Michelle Maiese lists five: misunderstanding, mistrust, strained and hostile communication, negative stereotyping, and non-negotiability.[i] Philosopher Joel Marks describes the defects of our typical sense of morality: it makes us angry; it promotes hypocrisy; it encourages arrogance; it is imprudent, leading us to do things that have obviously bad consequences; and it makes us intransigent, fueling endless strife.[ii]

Of these features, the worst is intransigence or non-negotiability, the refusal to entertain the possibility of coming to some reconciliation, compromise or agreement. Conflicts based on differing moral intuitions are notoriously difficult to resolve.

Why is this so? To find out, we need to take a close look at what morality is and what moral judgments are about. In this essay I discuss the ontology of morality; that is, how its manner of being is like and unlike that of other kinds of things we experience. I note a sort of impasse one can find oneself in once the ontological status of morality is recognized. Then I suggest a way out of the impasse: to think in terms of goodness rather than rightness.

According to psychologist Steven Pinker, moral judgment has specific cognitive, behavioral and emotional characteristics. Cognitively, the rules it evokes are taken to apply without exception. Prohibitions against rape and murder are believed to be universal and objective, not matters of local custom; and people who violate the rules are deemed to deserve condemnation. Behaviorally, we do in fact condemn moral offenders and praise those who obey the moral law in ways that do not apply to, for instance, people who merely wear unstylish clothes. Emotionally, when our sense of morality is triggered, we feel a glow of righteousness when we abide by the rules, guilt when we don’t, a sense of anger or resentment at those who violate the rules and a desire to recruit others to allegiance to them.[iii] (This account of moral judgment, by the way, is just a description. It does not itself make any moral claims.)

What is philosophically interesting is the nature of the moral rules. What sorts of things are they, and how do we know them? These are questions of ontology, the study of what exists, and epistemology, the study of how we come to know things. The two questions are closely related, of course, as the way we know things determines what we believe about what they are. My epistemological approach is loosely phenomenological in the Continental sense. In what follows I examine everyday experience of various kinds of entities without prejudging the status of their existence in order to find out how they appear to us. Metaphorically, at the risk of attributing agency where there is none, I investigate how they make themselves known to us. From the results of that inquiry I make judgments about their ontology. I follow Hans Jonas in thinking of ontology as the “manner of being” characteristic of various kinds of entities.[iv]

Most people, I suspect, especially those who intransigently insist that their morality is the right one, are moral realists. Moral realism is the doctrine that there are moral facts, expressible in propositions like “Murder is wrong,” that exist whether or not anyone believes they do. They are taken to be objective and independent of our perception of them and of our beliefs, feelings and attitudes towards them. In this view, if someone asks “Is murder wrong?” there is a correct answer because there really is, out there in the world, a fact of the matter.

But is there? The opposing view, with the somewhat unintuitive name “moral anti-realism,” says there is not. To see why someone might suspect that there are actually no moral facts out there in the world, we can contrast the manners of being of three different kinds of things: physical entities, mathematical/logical entities and moral entities.

We take physical entities to exist independently of us because of how they appear to us and how they behave when we interact with them. (I speak here of physical things of middling size in the everyday world, not the very tiny things of the quantum scale, nor those that are astronomically large.) Things in our ordinary experience appear in perspective. We see one side of an object, a tree, say, but not the other side. We fully expect that if we walk around the tree we will see its other side, and in fact when we walk around it, we do see its other side. If we try to occupy the same space as the tree by walking through it, we find that we can’t. A physical object occupies space and has a certain mass. If moving, it has a certain velocity (with respect to our frame of reference) and perhaps a certain acceleration. Physical objects appear in color, or at least in shades of dark and light. They persist. If we turn our back to the tree or close our eyes, we fully expect to be able to see it if we turn around or open our eyes, and our expectations are fulfilled. Physical objects change over time, and we can predict the changes well enough to take advantage of them, knowing, for instance, the best time to pick fruit from the tree. Physical objects are knowable by more than one person. We can measure the tree’s height and the circumference of its trunk, and anyone else using the same instruments will come up with the same measurements. For all these reasons it makes abundant sense to believe that physical objects exist in their own right, independently of us.

Mathematical/logical entities seem to exist independently of us as well, although they do so differently from physical objects. In contrast to physical objects, they have no perspective, no front and back. They have no mass, do not occupy space and have no velocity, acceleration or color. Unlike physical objects, which change over time, mathematical/logical objects do not. The number three is now, was always and always will be a prime number. But, like physical objects, mathematical/logical objects persist. Whenever we think of them they appear to us just as they did before, somewhat as a tree does when we open our eyes after closing them. And there are established procedures for investigating them, just as there are for physical objects. If someone proves a mathematical theorem, anyone with the requisite knowledge can verify that the proof is correct.

There is quite a philosophical controversy over the exact ontological status of mathematical and logical entities. Do they exist independently of us, or do they depend on us for their existence? Do we discover them, or do we in some sense construct them? I am very much simplifying the debate between Platonism and Nominalism here; the arguments can get very technical and arcane. But it is evident that some things certainly seem like facts: that two plus two equals four, that true premises of a valid argument yield a true conclusion, that an equilateral triangle is also equiangular, and so forth. The reality of these things does not depend on whether we believe in them or not, nor on how we feel about them. If we somehow construct them, we do so within very rigid logical constraints; there is only one possible way for each of them to be. And where does that logical constraint come from? Do we construct it? I find it more reasonable to believe that, like physical objects, mathematical/logical objects exist independently of us.

Moral entities such as the wrongness of murder or the obligation to tell the truth are different. They are neither physical nor mathematical/logical, but have characteristics of both. Like mathematical/logical entities and unlike physical objects, they lack perspective, mass, extension in space, velocity, acceleration and color. Like both mathematical and physical objects, they persist in time. If someone thinks murder is wrong today, he or she will most likely think it wrong tomorrow. Like physical objects, moral entities seem to change over time. Slavery was common and accepted in ancient Greece and Rome; today we find it morally wrong. But does that mean that the moral status of slavery has actually changed over the years, or was it always wrong and it has taken us some time to recognize its wrongness?

The fact that we can ask this question should alert us that there is something a bit strange about moral entities. Physical objects change over time in accordance with well-known natural laws. Mathematical/logical entities don’t. But we don’t have an easy and obvious answer as to whether moral entities do or don’t. Not only that, we don’t have an agreed-upon way to find out. We use the scientific method of experimentation to learn about the physical world. We use formal methods to prove mathematical and logical theorems. In both cases, any competent practitioner can use the method to find the result, a result that is objective in that it is agreed upon by all those who use the method. Objective results can be evaluated in the same way independently of who the evaluator is. In contrast, there is no accepted procedure that enables us to settle moral debate. There is no experiment to determine, for example, whether abortion is or is not morally acceptable. This leads one to suspect that moral entities do not exist objectively and independently of us as physical objects do.

There are other reasons to question the independent existence of moral entities. The late J.L. Mackie calls one of them the argument from relativity. It is an obvious fact that moral codes vary among societies and even among various classes and groups within a single society, as illustrated by the examples given above. Mackie takes these differences as evidence that different moral codes reflect different ways of life, not different apprehensions, “most of them seriously inadequate and badly distorted,” of an objective realm of moral entities.[v]

Mackie also offers the argument from queerness (by which term he means being odd or unusual, not sexual orientation). The argument from queerness, Mackie says,

has two parts, one metaphysical, the other epistemological. If there were objective values, then they would be entities or qualities or relations of a very strange sort, utterly different from anything else in the universe. Correspondingly, if we were aware of them, it would have to be by some special faculty of moral perception or intuition, utterly different from our ordinary ways of knowing everything else.[vi]

Ontologically, moral entities as we experience them do in fact seem to be different from physical and mathematical/logical entities. In addition to the points made above, there is another way they differ: they intrinsically motivate us to act. This assertion, technically known as “motivational internalism,” is not uncontroversial. Internalists believe that there is a logically necessary connection between one’s conviction that something ought to be done and one’s motivation to do it. Externalists deny this assertion and say that an independent desire, such as the desire to do the right thing, is required to motivate us. Rather than argue about concepts, I just want to point out that, empirically, moral judgments do in fact motivate the vast majority of us most of the time.[vii] We find a wallet with money in it and some papers identifying its owner. We know that morally we ought to return the money to the owner and feel some inclination to do so. Even if we keep the money, we feel the obligation, the impulse to do the right thing, and have to make some effort to overcome it.[viii]

In contrast, physical objects and mathematical/logical entities do not motivate us. A tree may be ripe with apples, but we are motivated to pick them not because they are there but because we feel hungry or think it would be nice to make an apple pie or in order to sell them or for some other reason that is intrinsic to us, not to the apples. We may enjoy the beauty of an elegant logical proof, but it does not motivate us to do anything about it unless we have, for instance, some curiosity about its further implications. The curiosity is ours, not the proof’s.

So moral entities do indeed seem to be queer in Mackie’s sense. They are not real in the familiar way that physical objects are, nor in the way that mathematical/logical entities are. They have some characteristics of both and one characteristic, that they inherently motivate us, shared by neither. If moral realism means to be real in the manner of physical objects or of mathematical/logical entities, then moral realism is false and moral anti-realism, true.

But that’s not the whole story. There is another way to be real.

As a way of approaching this other way to be real, consider the epistemological aspect of Mackie’s argument from queerness. He says that to apprehend moral entities that exist independently of us, we would need some special faculty of moral perception or intuition; and he thinks we have no such faculty. But actually, we do.

Philosophers have long debated the rational basis for moral judgments, but in fact most of our moral judgments are not made rationally. They are not carefully thought out; instead, they are the result of intuition. Jonathan Haidt and other researchers in social psychology have found that we humans are equipped, presumably from evolutionary adaptation to living in groups, with instincts for morals, a moral sense that is built into all of us except, perhaps, psychopaths.[ix] Most moral judgments are not the result of conscious deliberation. Instead, they are snap judgments made instantly and automatically. People rely on gut reactions to tell right from wrong and then employ reason afterwards to justify their intuitions. Intuitions, says Haidt, are “the judgments, solutions, and ideas that pop into consciousness without our being aware of the mental processes that led to them.” Moral intuitions are a subset: “Feelings of approval or disapproval pop into awareness as we see or hear about something someone did, or as we consider choices for ourselves.”[x] Feelings of approval and disapproval are cloaked in emotions such as delight, esteem and admiration or anger, contempt and disgust, and each of these motivates us to actions such as praise or blame. The moral sense is analogous to our capacity for language. All humans are able to learn and use language, but different cultures have different languages. Similarly, all humans have a sense of morality that manifests itself in moral intuitions. The details of what is morally approved and disapproved, however, vary from culture to culture, and that is where we find moral conflicts.

Let’s look carefully at an example of making such an intuitive moral judgment. Suppose you came across a person beating a dog. You would, if you are like many people in relatively affluent and polite Western societies, feel revulsion and disapproval. You would feel some impulse to try to get the person to stop; you would feel justified in telling the person to stop, perhaps even obligated to do so; and if asked about it, you would say that beating the dog is wrong. If asked about it further, you would cite a rule to the effect that inflicting needless harm on sentient creatures is morally forbidden.

There is a certain structure to this scenario, a way of describing it that Aristotle would call an explanation in terms of form. The structure is this:

  • There is an action going on out in the public world, an action that anyone can see: the person beating the dog.
  • You have your reaction of moral disgust, with its cognitive, affective and behavioral components. Cognitively you ascribe wrongness to the action. In your view, beating a dog counts as something wrong, something one should not do.
  • Your ascription of wrongness is an instance of a more generalized rule or system of rules to which you can refer in cooler moments, such as “Harming sentient beings needlessly is wrong,” a rule that is shared among others of your society and social class. (But it might not be shared among people of a different society or social class.)

More succinctly, beating a dog counts as wrong in the context of a generally accepted rule constituting it as wrong. Abstracting from the particulars, we can describe the structure of this scenario as “X counts as Y in context C.” Here X stands for the beating of the dog, Y stands for being wrong, and C stands for the general rule, accepted by members of your social class, to avoid needless harm.

That structure, “X counts as Y in context C,” is exactly the structure that philosopher John Searle identifies as the structure of institutional facts, facts that exist only by virtue of collective agreement or acceptance.[xi] Institutional facts are socially constructed, and there are quite a number of them. Searle mentions money, property, marriages, governments, tools, restaurants, schools and many others. They exist only because we believe them to exist, and Searle’s aim is to account for their ontology. To exist only because we believe in them sounds paradoxical. Are they like Tinker Bell? If we quit believing, would they stop existing? If so, why do we believe in them in the first place? But actually, their ontology can be rationally accounted for.

An institutional fact can be described in physical terms, but to describe only the physical aspect misses its essence. Take, for example, money. We take bits of paper with certain markings on them to be media of exchange and stores of value. Historically people have taken many different kinds of things to be money: shells, beads, coins, pieces of paper, bits of data in computer systems. But these things are not money by virtue of their physical properties. Their physical properties alone do not enable them to be used as money, even in the case of precious metals. They are money only because human beings use them as money, accept their use as money and have rules that govern their use as money.[xii] The rules actually constitute money. They do not regulate some preexisting use of bits of material; the use of certain bits of material as money is possible only in the context of the rules. The rules governing money are more like the rules of chess than rules regulating which side of the road to drive on. They create the very possibility of using money to buy and sell things.[xiii]

Searle goes into a great deal of detail about the logical structure of socially constructed facts (logical because language is an essential element in their construction and logic is a feature of language), which need not concern us here. I want only to point out the similarities between his account of such facts and morality.

  • Socially constructed facts are not physical. The markings on a US five-dollar bill are physical, but the fact that it is money is not. Similarly, moral rules are not physical.
  • Socially constructed facts are not mathematical or logical. It is not logically necessary that the piece of paper with five-dollar markings on it be money. It could without contradiction fail to be regarded as money. Similarly, moral rules are not mathematical/logical entities.
  • Socially constructed facts persist in time. The five-dollar bill has been used as money for some time, and we expect its use to continue. Similarly, moral entities persist in time.
  • Socially constructed facts can change over time and space. An 11th-century Chinese bank note is not money today, although we can recognize that it used to be money. A US five-dollar bill is not legal tender in most other countries today, even if it is known to be money in the US. Similarly, moral rules change over time and vary from place to place.
  • Socially constructed facts have normative implications. Searle notes that social institutions such as marriages, property and money entail institutional forms of powers, rights, obligations and duties. These are things that give us reason to act that are independent of whether we are inclined to do so or not.[xiv] Similarly, moral entities do in fact motivate us to action regardless of our inclination otherwise.
  • Socially constructed facts have functions that the underlying physical facts do not. These functions are part of the definition of the social facts. The status of bits of paper as money implies their function as media of exchange. That’s what it means to be money.[xv] Similarly, moral norms have functions. Among members of a society they promote and regulate social cooperation. Within each person they promote order among potentially conflicting motivations, thereby encouraging that person to be a constructive participant in the cooperative life.[xvi]
  • Socially constructed facts have the structure “X counts as Y in context C.” So do moral evaluations of particular actions and of types of action.

Based on these considerations it seems reasonable to say that the manner of being of moral entities is to be socially constructed. They exist independently of any particular person, but they are not independent of conscious agents altogether as physical and (arguably) mathematical/logical entities are. Moral entities are socially constructed within a community of practice, a social group, a culture or a society. Within such a community or society, everybody agrees (more or less) on what they are, everybody treats them the same way and everybody acts as if they are real. Just as there are consequences for the way we deal with physical objects, there are real consequences for the way we abide by moral rules or not, namely the reactions of others in the community. So, for members of such a community they are real. The ontological status of morality is that it is a socially constructed reality.

Is this conclusion morally realist or anti-realist? As with many conceptual issues, it depends on definitions of terms. If “realism” means to be real as physical entities are, then it is anti-realist. If “realism” means to be real in any fashion at all, then it is realist. More important is what it tells us about the source of moral conflict. Moral systems vary among societies, but each society takes its morality to apply to all people universally. Hence, nobody wants to compromise. What our conclusion does not tell us is what to do about such conflict. For that, we need some more consideration.

(To be continued.)


[i] Maiese, “Moral or Value Conflicts.”

[ii] Marks, Ethics without Morals, pp. 40-48.

[iii] Pinker, “The Moral Instinct.”

[iv] Jonas, Mortality and Morality, p. 88.

[v] Mackie, Ethics, p. 37.

[vi] Ibid., p. 38.

[vii] For an account of why this is so based on empirical research see Prinz, “The Emotional Basis of Moral Judgments.”

[viii] This example is specific to a certain culture and a socioeconomic class within that culture, but similar examples obtain mutatis mutandis in other cultures and classes.

[ix] Haidt, The Righteous Mind, pp. 123–127 and pp. 170–176. Haidt and Joseph, “The Moral Mind.” Haidt and Joseph, “Intuitive Ethics.” Pinker, “The Moral Instinct.”

[x] Haidt and Joseph, “Intuitive Ethics,” p. 56.

[xi] Searle, The Construction of Social Reality, pp. 2, 28, 43-45.

[xii] Ibid., pp. 41-45.

[xiii] Ibid., pp. 27-28.

[xiv] Ibid., p. 70.

[xv] Ibid., p. 114.

[xvi] Wong, “Making An Effort To Understand,” p. 13.


Haidt, Jonathan. The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon Books, 2012.

Haidt, Jonathan, and Craig Joseph. “Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues.” Daedalus, Fall 2004, Vol. 133, No. 4, pp. 55–66. Online publication as of 12 September 2017.

Haidt, Jonathan, and Craig Joseph. “The Moral Mind: How Five Sets of Innate Intuitions Guide the Development of Many Culture-Specific Virtues, and Perhaps Even Modules.” Carruthers, Peter, et al., Eds. The Innate Mind, Volume 3, pp. 367-391. New York: Oxford University Press, 2007. Online publication as of 12 September 2017.

Jonas, Hans. Mortality and Morality: A Search for the Good after Auschwitz. Ed. Lawrence Vogel. Evanston, Illinois: Northwestern University Press, 1996.

Mackie, J.L. Ethics: Inventing Right and Wrong. London and New York: Penguin Books, 1977.

Maiese, Michelle. “Moral or Value Conflicts.” Beyond Intractability. Ed. Guy Burgess and Heidi Burgess. Conflict Research Consortium, University of Colorado, Boulder, Colorado, USA. Online publication as of 6 July 2017.

Marks, Joel. Ethics without Morals: In Defense of Amorality. New York and London: Routledge, 2013.

Pinker, Steven. “The Moral Instinct.” New York Times, January 13, 2008. Online publication as of 13 January 2008.

Prinz, Jesse. “The Emotional Basis of Moral Judgments.” Philosophical Explorations, Vol. 9, No. 1, March 2006, pp. 29-43. Online publication as of 12 August 2017.

Searle, John R. The Construction of Social Reality. New York: The Free Press, 1995.

Wong, David. “Making An Effort To Understand.” Philosophy Now, Issue 82 (January/February 2011), pp. 10-13. London: Anya Publications, 2011. Online publication as of 12 April 2012.

Sep 13 18

Moral Hallowing Reevaluated

by Bill Meacham

After getting some feedback on my essay last time on Richard Beck’s notion of moral hallowing, I realize that I was a bit too harsh on him. A reader comments,

Just taking an intellectual position [of moral anti-realism] does not cause my underlying, social ape moralizing and politicking to stop, nor could it, in a real human with human psychology. I am not going to stop behaving as though or acting on my underlying, subjective belief that murder is wrong ….(1)

Right. All of us except for psychopaths have a sense of morality that we cannot simply reason away. The details of what conduct is prohibited, allowed and required by the moral code vary from culture to culture, but all cultures have one. Every culture has sets of rules, whether stated explicitly or not, that specify how people are to act. And people in every culture—which is to say all people, as we never find humans in isolation—have internalized the moral code of their culture and have a conscience, a sense of right and wrong. Most of our moral judgments are not made rationally. They are not carefully thought out; instead, they come as intuitions, which some call the voice of conscience.

By “intuitions” I mean rapid and automatic judgments. Psychologists Jonathan Haidt and Craig Joseph say that intuitions are “the judgments, solutions, and ideas that pop into consciousness without our being aware of the mental processes that led to them.” Moral intuitions are a subset: “Feelings of approval or disapproval pop into awareness as we see or hear about something someone did, or as we consider choices for ourselves.” Human beings “come equipped with an intuitive ethics, an innate preparedness to feel flashes of approval or disapproval toward certain patterns of events involving other human beings.”(2) Haidt explains that most people have more than one category of moral intuition: an urge to care for people and prevent harm, for instance, a concern for fairness, a respect for authority but also a revulsion toward those who dominate others, and more; and their relative degrees of influence vary from person to person.(3)

Beck says that “both Christians and atheists ground their ethics in metaphysics, in presupposed ‘oughts,’ basic norms taken as givens.”(4) In my essay last time I took him to mean that when people really think about it, they find that they can articulate what their basic norm is. I objected that after careful thought some people intellectually adopt moral anti-realism and recognize no basic norm. What I overlooked is that even such people can’t help feeling moral emotions and making intuitive moral judgments.

Consider Peter Singer, a moralist best known for his role as an intellectual founder of the animal rights movement. He is a stringent utilitarian, arguing that “we ought to be preventing as much suffering as we can”(5) and that physical proximity makes no difference in how much we are obligated to help someone. A needy child in East Bengal counts morally as much as one right next door.(6) But he has spent tens of thousands of dollars a year on care for one person, his mother,(7) money that could instead have fed several hundred children in Africa.(8) My point is not to blame Singer for his choice. I just want to point out that when it comes to morality, our intuitions, such as the urge to help one’s mother, often have more influence on our decisions than our intellectual positions.

This is a more charitable way to understand Beck’s claim that everyone grounds their ethics in metaphysics. Regardless of our intellectual position, we all have moral instincts, and we act on them. Beck’s talk of metaphysics makes the process sound more cerebral than it is. A great number of us don’t think through the implications of our norms far enough even to ask whether there is one that grounds them all. But if we do take a moment to reflect, we find that some things are indeed of overriding importance to us, in practice if not in theory.

So in that sense, Beck is right. Now the question is “So what?” What shall we do about the norms we rely on?

The first thing to note is that moral norms are not the only ones that influence our behavior. The norms we follow are not just our moral instincts, but our baser tendencies as well. The great majority of us fall prey at times to pride, envy, gluttony, lust, anger, greed and sloth, not to mention simple selfishness and discourtesy to others. Or we approach life in ways that are self-defeating, leading to dissatisfaction and unhappiness. Or both. In Christian terms, we sin. In secular terms, we succumb to akrasia, the vice of weakness of will. Lacking self-control, we act against our better judgment.

There is no shortage of advice as to what to do about such unfortunate circumstances. Christians advise us to repent and get right with God. Buddhists advise us to cultivate mindfulness and compassion. Stoics advise us to quit worrying about things we have no control over and make good choices about the ones we do. In my book I explain a number of ways we can take advantage of our uniquely human ability to think about our own thinking and avoid emotional rigidities that impair our ability to make good choices.

But what about the moral norms themselves, whether or not we live up to them? Once we get some clarity about the fact that we have such norms and what they are, we get to question them. What is their basis? Why should we follow them? We feel the obligation to do good, be fair, and so forth, but why should we? These are meta-ethical questions having to do with the ontology of morals. And that is a topic for another time.


(1) Lucas.

(2) Haidt and Joseph, “Intuitive Ethics,” p. 56.

(3) Haidt, The Righteous Mind, pp. 123–127 and 170–176.

(4) Beck, 29 August 2018.

(5) Singer, “Famine, Affluence, and Morality,” p. 238.

(6) Ibid., pp. 231-232.

(7) Specter, “The Dangerous Philosopher,” p. 55.

(8) Unite For Sight, “Fighting Hunger.”


[Beck, 29 August 2018] Beck, Richard. “Yet More On Moral Hallowing.” Online publication as of 30 August 2018.

Haidt, Jonathan. The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon Books, 2012.

Haidt, Jonathan, and Craig Joseph. “Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues.” Online publication available upon request from the author as of 9 September 2018.

Lucas, Richard, comment on Meacham, “Moral Hallowing.” Original comment posted on Google Plus.

Meacham, Bill. How To Be An Excellent Human: Mysticism, Evolutionary Psychology and the Good Life. Austin, Texas: Earth Harmony, 2013.

Singer, Peter. “Famine, Affluence, and Morality.” Philosophy & Public Affairs, Vol. 1, No. 3 (Spring, 1972), pp. 229-243. Online publication as of 12 April 2017.

Specter, Michael. “The Dangerous Philosopher.” The New Yorker, 6 September 1999, pp. 46-55. Online publication as of 8 September 2018.

Unite For Sight. “Fighting Hunger.” Online publication as of 10 September 2018.