Jan 11 19

On Consciousness (grumpy)

by Bill Meacham

I suppose my insistence on clarity of language about consciousness makes me a bit of a curmudgeon—or perhaps a bellyacher, crab, crank, grump or whiner—but I am appalled at some of the things people say about the topic. Here is an example:

Psychology professors Peter Halligan and David Oakley assert that being conscious is merely a byproduct of brain processes, a respectable position in philosophy of mind called Epiphenomenalism.(1) But when they try to say what they are talking about, all they do is repeat synonyms:

We all know what it is to be conscious. It is, basically, being aware of and responding to the world.

… while undeniably real, the “experience of consciousness” or subjective awareness is precisely that – awareness. No more, no less.

… subjective awareness [is] the intimate signature experience of what it is like to be conscious….(2)

So being conscious is being aware, being aware is having experience, and having experience is being conscious. These definitions are ridiculous. They are completely circular and shed no light on the subject. The problem is that “conscious” and “aware” are largely synonymous, which becomes apparent when you try to translate them into German or Spanish or Portuguese or any other language that has only one word where English has two. As Wittgenstein said, we are bewitched by our language.(3)

What should the authors have said instead? I have written a whole paper on the subject of how to speak about being conscious, which I’m told is fairly clear. Rather than summarize it, I urge you to read the paper itself.(4) In what follows I condense the authors’ argument and rephrase it in what I think is better terminology.

We all know what it is to be conscious. The world appears to us vividly, and we respond to it. The world includes public things such as trees and people and private things such as our thoughts and feelings. Some thoughts and feelings are conscious, meaning that they appear to us vividly and we can notice and focus on them. Others are less vivid; figuratively, they are in a sort of periphery. Some are so dim as to be not noticeable at all, and we call them unconscious. Here is a picture:

Many people think that we can control our conscious thoughts and feelings, and that they in turn can cause us to act in certain ways. But modern neuroscience tells us that that is not so.

The rest of the argument is clear enough in the authors’ own words:

There is now increasing agreement that most, if not all, of the contents of our psychological processes – our thoughts, beliefs, sensations, perceptions, emotions, intentions, actions and memories – are actually formed backstage by fast and efficient nonconscious brain systems. … Continuing to characterise psychological states in terms of being conscious and non-conscious is unhelpful.(5)

The authors conclude that conscious psychological processes and unconscious psychological processes are functionally the same; they are both caused by physical events in the brain. Whether they are conscious or not makes no difference in their causes or what they do. The only difference is that some are presented to us vividly enough that we notice and pay attention to them, and some aren’t.

That’s the argument. Whether it holds up or not is for another time. My only point in this essay is that it is quite possible to state the case in terms that are not circular and not ambiguous. Go forth and do likewise.


(1) Robinson, “Epiphenomenalism.”

(2) Halligan and Oakley, “What if consciousness is just a product of our non-conscious brain?”

(3) Wittgenstein, Philosophical Investigations, §109.

(4) Meacham, “How to Talk About Subjectivity (Don’t Say ‘Consciousness’)”.

(5) Halligan and Oakley.


Halligan, Peter, and David A. Oakley. “What if consciousness is just a product of our non-conscious brain?” Online publication as of 9 January 2019.

Meacham, Bill. “How to Talk About Subjectivity (Don’t Say ‘Consciousness’)”. Online publication as of 9 January 2019.

Robinson, William. “Epiphenomenalism.” The Stanford Encyclopedia of Philosophy (Fall 2015 Edition), Edward N. Zalta (ed.), Online publication as of 9 January 2019.

Wittgenstein, Ludwig. Philosophical Investigations, 3rd Edition. Tr. G.E.M. Anscombe. Oxford: Basil Blackwell, 1968 (1986). Online publication as of 25 October 2018.

Nov 21 18

The Game

by Bill Meacham

I recently learned of a game called The Game, the rules of which pose interesting philosophical questions. Here are the rules:(1)

  1. Everybody in the world who knows about The Game is playing The Game. A person cannot decline to play The Game; it does not require consent to play and you can never stop playing.
  2. Whenever you think about The Game, you lose.
  3. Losses must be announced. This can be done verbally, with a phrase such as “I just lost The Game”, or in some other way, for example on social media or by holding up a sign.
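Purely for fun, the three rules can be sketched as a tiny state machine. This is an illustrative toy of my own devising, not part of any official formulation of The Game; the class and method names are invented:

```python
class TheGame:
    """A toy model of the three rules of The Game (illustrative only)."""

    def __init__(self):
        self.players = set()

    def learn_about(self, person):
        # Rule 1: merely knowing about The Game makes you a player; there is no opt-out.
        self.players.add(person)
        # Learning about The Game necessarily involves thinking about it...
        return self.think_about(person)

    def think_about(self, person):
        # Rule 2: whenever a player thinks about The Game, that player loses.
        if person in self.players:
            return self.announce(person)

    def announce(self, person):
        # Rule 3: losses must be announced, verbally or otherwise.
        return f"{person} just lost The Game"


print(TheGame().learn_about("Alice"))
```

Notice that the sketch captures the self-referential twist: there is no method for winning, and every path through the rules ends in an announced loss.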

OK, now you know about The Game and you are thinking about it. You lose. Sorry about that.

How do you feel about the assertion that you lose? Some typical reactions are curiosity, amusement, befuddlement, indifference and annoyance. I find The Game intellectually engaging and choose to address some of its philosophical and psychological issues. As you read this essay you will continually lose The Game, but don’t worry. There’s no penalty for doing so.

Is It a Game?

First, is The Game really a game? Some have called it a “mind virus.”(2) It is certainly a meme in Richard Dawkins’ original sense of a unit of cultural transmission.(3) But what does it have in common with other games, such as chess or football or ring-around-the-rosie? Here are some definitions of the term “game”:

A game is a structured form of play …. Key components of games are goals, rules, challenge, and interaction.(4)

The Game has rules and interaction, but what is its goal? In many games the goal is to win, but it seems impossible to win The Game by deliberately trying to win.

A game is commonly defined as one or more players trying to achieve an objective ….(5)

Again, what is the objective? It can’t be to win, as there is no way to do so. We have to look beyond the rules to the context in which The Game is played. Some people take the objective to be to infect as many people as possible with the mind virus. For others the objective seems to be simply to have fun with friends or potential friends and affirm a sense of community with them. At a comic book convention, for instance, or a science fiction convention or the like, someone may exclaim “Oh rats, I just lost The Game,” thereby provoking others to groan and admit that they lost it as well.

Ludwig Wittgenstein pondered the nature of games and asserted that there is no essence of game, nothing that uniquely identifies games. Instead, games bear a “family resemblance,” as he called it, to each other. They have a series of overlapping similarities, but no one feature is common to them all. Each game resembles at least one other, but no feature is common to all games and only games.(6) Given this approach, it is safe to say that The Game is indeed a game.


A crucial feature of The Game is that it is self-referential. Playing it requires some degree of second-order thinking, also called self-awareness or metacognition. You have to notice that you are thinking of The Game in order to announce that you have lost. Not only that, you do so ironically. You announce your loss as if dismayed, but you are not really dismayed. You actually kind of enjoy announcing it. Not only do you know that you have thought of The Game and thus lost, you also know that you don’t really mind losing, but you pretend you do. This capacity for self-awareness is the uniquely human virtue, what human beings do that other beings don’t or don’t do nearly so well.(7) Socrates held that the unexamined life is not worth living. The Game is one way humans have fun being human.

Ironic Process

The Game is a variant of what is called “ironic process,” whereby deliberate attempts to suppress certain thoughts make them more likely to surface.(8) The process is ironic because it produces an effect contrary to what is desired. You can try to win by not thinking about The Game, but that’s difficult. Fyodor Dostoevsky wrote, “Try and set yourself the problem of not thinking about a polar bear and you will see that the damned animal will be constantly in your thoughts.”(9) Researchers have found that when we try not to think of something, one part of our mind does avoid the forbidden thought, but another part “checks in” every so often to make sure the thought is not coming up, thereby, ironically, bringing it to mind.(10)

Now, in fact there is a way to avoid thinking about a polar bear, and that is to think very hard of something else instead. Imagine a black bear or an elephant or some other animal. Imagine this animal dancing around. With sufficient focus, you can avoid thinking of the polar bear. No doubt it is a bit difficult and not something most of us do much, but we are not helpless before our thoughts. I once did it by repeating to myself over and over “There is something of which I must not think. There is something of which I must not think.” After a while I stopped and could not remember what it was! It came to me later, and now I have forgotten it altogether, but for a time I was successful.

The ability to focus your thoughts, to exert some control over them, is of profound importance. A Sufi mystic says,

He who does not direct his own mind lacks mastery. … If he does not control his mind, he is not a master but a slave. … Mastery lies not merely in stilling the mind, but in directing it towards whatever point we desire, in allowing it to be active as far as we wish, in using it to fulfill our purpose, in causing it to be still when we want to still it. He who has come to this has created his heaven within himself; he has no need to wait for a heaven in the hereafter, for he has produced it within his own mind now.(11)

Are You Playing?

Now here is a conundrum: If you know about The Game, and you think of it but don’t announce that you have lost, are you playing the game? Arguments can be made for both alternatives, that you are and that you aren’t.

Abstractly, if you think of The Game as a set of rules, the first of which is that you can’t refuse to play once you know about The Game, then you are indeed playing The Game when you know that you have thought of it, whether or not you announce your loss. There are different concrete scenarios in which this situation can play out.

Firstly, you might just forget. You might think of The Game—that is, it might idly occur to you, or you might hear someone mention it or you might think about it abstractly as I am doing in this essay—but forget that you thereby lose. In that case you are playing the game but not correctly.

Secondly, you might cheat. You cheat if you think of The Game and remember that you are supposed to announce your loss but don’t. You lie by omission. You might try to lie overtly and say that you have won The Game, but then everyone would know that you are grossly mistaken about the rules. To lie and not get caught, you have to remain silent. By remaining silent, you signal to others that you don’t know about The Game. (And if they aren’t thinking about The Game, they don’t even recognize your signal.) On this interpretation of when The Game is being played, remaining silent when you are supposed to announce your loss is playing The Game, but cheating at it. Do we say of someone who cheats that they aren’t playing the game? No, we say that they are playing, but not correctly. You participate in The Game by cheating; if you weren’t participating at all, you would have no thought of not participating and would not be cheating.

On the other hand, you might refuse to announce your loss because you have decided not to play The Game. Perhaps you find it silly, or it once seemed like too much bother so you didn’t speak and now silence has become a habit, or you are just ornery and don’t want to play. One presenter at a convention got so angry at people interrupting the proceedings with their announcements that they had lost The Game that he made something of a crusade of opposing it.(12) Are you playing The Game if you deliberately decide not to? A case can be made that in that case you are not playing.

The first rule of The Game is that you can’t avoid playing, so even if you decide you don’t want to, you can’t help it. But who is to say that you have to obey the first rule? What if we say that to play The Game you have to obey all the rules? In that case by not obeying the first rule, you avoid playing The Game! How could we justify the rule that you must obey all the rules? Is that one a rule of The Game? You can’t justify it by appealing to a further rule, as doing so would get you into an infinite regress: you could only justify the further rule by a yet further one, and so on ad infinitum.(13)

Wittgenstein would say that the only way to justify having to play by the rules is by appeal to the practices of the players, their customs and their uses of the game.(14)

The Game is a social construct, played with others. By not interacting in the prescribed manner, you don’t participate in it. By your silence you avoid playing. If others announce that they have lost The Game and you don’t, and they have reason to believe that you know about The Game, then they know that you are refusing to play. They know (or are convinced or strongly suspect) that the idea of The Game has come to your mind, and they can see that you have not announced your loss, so they are justified in believing that you are deliberately not playing. (But of course in that case, they might just decide that they don’t want to play with you either and go off without you. Maybe you should play in order to avoid missing out on further fun.)

It seems clear on this view that you are not playing the game. But the other players might say that you are too playing the game, and you are just deluded into thinking you are not. So maybe it is not so clear after all.

Now, which argument is stronger, the one that says you are playing The Game when you don’t announce your loss or the one that says you aren’t? There seems to be no clear answer. The argument is about the meaning of concepts and how to apply them, a favorite topic among philosophers. Let’s apply the pragmatic method of William James, who in common with Wittgenstein aimed at cutting through conceptual confusion. James says, “The pragmatic method … is to try to interpret each notion by tracing its respective practical consequences. What difference would it practically make to any one if this notion rather than that notion were true?”(15) The practical consequence of saying that you are playing the game is to affirm the solidarity of the community of players. The practical consequence of saying that you are not is to affirm the freedom of the individual. The answer depends on your point of view and your desired outcome. Beyond that, dispute is idle. But idle dispute is not useless. The advantage of such an undecidable question is that it enables those who enjoy discussion to keep talking. They get to keep playing the philosophy game.


Well, who would have thought there was so much to say? Is there a point to all this? No, it’s just a game.



(1) Wikipedia, “The Game (mind game).” Another formulation of the first rule is that everyone in the world is playing The Game, but I don’t see how you can play a game you never heard of.

(2) Haywood, “Lose The Game.”

(3) Dawkins, The Selfish Gene, p. 192.

(4) Wikipedia, “Game.”

(5) Haywood, “Lose The Game.”

(6) Wittgenstein, Philosophical Investigations, §65-71.

(7) Meacham, How To Be An Excellent Human, chapter 20.

(8) Wikipedia, “Ironic process theory.”

(9) Dostoevsky, Winter Notes on Summer Impressions, p. 62.

(10) Winerman, “Suppressing the ‘white bears’.”

(11) Khan, “Stilling The Mind,” pp. 126-127. The author wrote before there were efforts to remove gender discrimination from common usage. Out of respect for historical sources, I have left the language as it was originally given and offer sincere apologies to any who feel alienated or offended by the choice of words. Certainly the author intended to include everyone.

(12) Dorn, “Finding Five Dollars.”

(13) Carroll, “What the Tortoise Said to Achilles.”

(14) Wittgenstein, Philosophical Investigations, §197-202.

(15) James, “What Pragmatism Means,” p. 142.



Top: as of 16 November 2018. San Diego Comic-Con 2008 day 1. The person pictured is Raven Myle Aurora. Photo by Jason Mouratides from Portland, Oregon, USA. CC BY 2.0.


Bottom: as of 16 November 2018. Text: “I’m as surprised as you! I didn’t think it was possible.”



Carroll, Lewis. “What the Tortoise Said to Achilles.” Mind 4, No. 14 (April 1895): 278-280. Online publication as of 12 November 2018.

Dawkins, Richard. The Selfish Gene. New York: Oxford University Press, 1976.

Dorn, Trae. “Finding Five Dollars (Why ‘The Game’ is Dumb).” Online publication as of 16 November 2018.

Dostoevsky, Fyodor. Winter Notes on Summer Impressions. Tr. Kyril FitzLyon. London: Quartet Books, 1985.

Haywood, Jonty, et al. “Lose The Game – The World’s Most Infamous Mind Virus.” Online publication as of 16 November 2018.

James, William. “What Pragmatism Means.” In Essays In Pragmatism. Ed. Aubrey Castell. New York: Hafner Publishing Co., 1948 (1961). Online publication as of 16 November 2018.

Khan, Inayat. “Stilling The Mind.” In The Sufi Message of Hazrat Inayat Khan, Volume VII, In An Eastern Rose Garden. London: Barrie and Jenkins, 1973. Online publication as of 16 November 2018.

Know Your Meme. “The Game.” Online publication as of 6 November 2018.

Meacham, Bill. How To Be An Excellent Human: Mysticism, Evolutionary Psychology and the Good Life. Austin, Texas: Earth Harmony, 2013. Available at

Wikipedia. “Game.” Online publication as of 16 November 2018.

Wikipedia. “Ironic process theory.” Online publication as of 16 November 2018.

Wikipedia. “The Game (mind game).” Online publication as of 15 November 2018.

Winerman, Lea. “Suppressing the ‘white bears’.” American Psychological Association Monitor on Psychology. October 2011, Vol 42, No. 9, page 44. Online publication as of 17 November 2018.

Wittgenstein, Ludwig. Philosophical Investigations, 3rd Edition. Tr. G.E.M. Anscombe. Oxford: Basil Blackwell, 1968 (1986). Online publication as of 25 October 2018.

Nov 5 18

Reassessing Morality Part 2

by Bill Meacham

(This is the second of a two-part series. In the first part I argued that morality is best conceived as a socially constructed reality.)

Part II: The Practice of Morality

When we recognize the socially constructed status of moral rules, responsibilities, obligations, prohibitions and the like, we may find ourselves in a bit of a quandary: what to do with our new understanding. We understand that these things do not, in fact, apply universally. Now we have a choice: shall we take them to apply to us? We could, it seems, just ignore them, or ignore the ones we don’t like. But on what basis would it be rational to ignore them, and which ones?

Morality of some sort is necessary for human existence, for we cannot live without others of our kind. Zoologists classify the human species as “obligatorily gregarious.”[1] We must have ongoing and extensive contact with our fellows in order to survive and thrive, and morality governs those interactions. Suppose we wanted to devise a moral system for universal use. On what rational basis could we choose the rules of that system?

It is theoretically possible to opt out of socially constructed reality in a way that we cannot opt out of physical and mathematical/logical reality. If everybody by some magical trick stopped believing in physical reality, it would assert itself anyway. Even if we believed we could, we could not in fact walk through a tree. The same goes for mathematical/logical reality. The square root of nine would still be three even if nobody believed it. But if everyone stopped believing in money, we would have no money. We would have only bits of paper and metal. Similarly, it seems that we could opt out of morality, although doing so would be quite difficult.

It would be difficult because socially constructed reality is not merely fictional; it is, in its own way, real. Powerful evolutionary forces have instilled in us a sense of morality; we can’t just wish it away. Moral entities, and institutional facts in general, have a peculiar nature: they compel our behavior even though we, in a sense, just make them up. They compel our behavior because they seem really to be there. Approaching the issue not analytically but from the point of view of a member of society, sociologists Peter Berger and Thomas Luckmann observe that institutional facts are “experienced as possessing a reality of their own, a reality that confronts the individual as an external and coercive fact.” The social world appears to each of us “in a manner analogous to the reality of the natural world … as an objective world.”[2] The socially constructed entities may exist only because we believe they do, but we believe they exist because they seem really to be there. And, for most of us, they continue to seem really to be there even after we recognize their socially constructed nature, much as an optical illusion still fools us even when we know that it is only an illusion.

It is no small thing to be an institutional fact. To minimize the importance of morality by saying that it is “just” socially constructed is to overlook its emotional and motivational force on us. You can remove yourself from some institutions, marriage for example, but doing so generally requires other people; in other words, you create an alternative social institution. Some communes may try to do away with money, but most of them have to interact with the outside world, which forces them to deal with money anyway. And yet, recognizing the socially constructed nature of morality opens a possibility that was not apparent to us before.

Before we think about it much, we treat moral rules as constraining our conduct because we take them for granted. Their socially constructed character is invisible to us, largely because our acceptance of them is not something we do deliberately. We are taught the moral rules by parents, elders and educators in our society. Just as we take money, marriage, government, property and the myriad other institutional facts as real, so we take moral rules as objectively real. We question them only when cracks in the structure of our social reality confront us, as illustrated by moral conflicts such as those mentioned in Part I. And many of us don’t even question them then.

But for those who do, a sort of spell is broken. Intellectually, we do not see our world the same way as before; we are no longer taken in by moral reality. Once we understand that morality is socially constructed, we have the freedom to buy into it or not. We are able to choose, within the constraints of our emotional and social conditioning, which duties to obey. This freedom can seem like a burden because emotionally we still feel the force of these moral intuitions. We may know intellectually that it is not always wrong to steal things, but we still cringe a bit at the thought of doing so.

Philosophically, the question of whether to obey certain moral rules and not others or to include certain ones but not others in a deliberately constructed moral system cannot be answered in the context of the moral rules in question, because to do so would be already to assume the answer. We need some other way to resolve the issue. The resolution can come by recognizing a further fact about rules for behavior: they are not all socially constructed.

Moral rules are socially constructed, but other rules are not: prudential or practical rules variously called “maxims,” “policies,” “rules of thumb” and the like. We do not have to evaluate our actions in terms of moral rightness and wrongness; we can instead evaluate them in terms of the benefits or harms of their consequences. Moral rightness is socially constructed. The effects of our actions are not.

Morality and Prudence: Rightness and Goodness

Morality and prudence are two ways of thinking about ethics. (By “ethics” I mean the evaluation of conduct generally. Morality and prudence are subsets of ethics.) Prudence is the exercise of rationality to promote one’s own interests. To act prudently is to act wisely and rationally in order to achieve one’s goals. I want to use the term “prudence” in a slightly more extended sense, as one’s chosen goals might not always be in one’s actual interest.

To understand the difference between morality and prudence, we can put the matter in linguistic terms. The two are manifested as two clusters of concepts and language used to command or recommend specific actions or habits of character. We can call them rightness and goodness. The rightness paradigm recognizes that people live in groups that require organization and regulations, and frames values in terms of duty and conformance to rules. The goodness paradigm recognizes that people have desires and aspirations; it frames values in terms of what enables a being to achieve its ends. The right has to do with laws and rules; the good, with the achievement of goals. Rightness and goodness are two alternative ways of organizing the whole field of ethics to carry out the tasks of evaluating conduct, both in particular cases and in general types.[3] Judgments of rightness and wrongness, like judgments of goodness and badness, can apply to particular actions, to types of actions, and to the habits of conduct that make up a person’s character.

Morality exemplifies the rightness paradigm, which uses the terms “right” and “wrong” to evaluate conduct. Some synonyms for “right” are “proper,” “moral” and “permissible.” Some synonyms for “wrong” are “improper,” “immoral” and “impermissible.” Morality is not the only kind of rightness. Others are law, which consists of legal rules enforced by the threat of physical coercion, and etiquette, social rules enforced solely by praise and blame. It is obvious that law and etiquette are socially constructed. As we have seen, it is reasonable to believe that morality is too.

Prudence exemplifies the goodness paradigm. That paradigm uses the terms “good” and “bad” to evaluate not only conduct but also things, people, states of affairs, etc., as well as maxims or guidelines for conduct. Some synonyms for “good” are “helpful,” “nourishing,” “beneficial,” “useful” and “effective.” Some synonyms for “bad” are their opposites: “unhelpful,” “unhealthy,” “damaging,” “useless” and “ineffective.”

Something that benefits something or someone we call good for that thing or person. Such goodness may be instrumental or biological. Instrumentally, a hammer is good for pounding nails, and what is good for the hammer is what enables it to do so well. Biologically, air, water, and food are good for living beings.

To make sense, an instrumental usage requires reference to someone’s purpose or intention. Thus, a hammer is good for pounding nails, and you pound nails in order to build things such as furniture or housing. Your intention is to acquire the comfort and utility these things afford you. That is your goal, or end, and the good is what helps bring it about.

The biological usage does not require reference to purpose or intention. It is expressed in terms of health and well-being. That which nourishes a living thing is good for it. The good, in this sense, is that which enables a thing to function well, that is, to survive, thrive and reproduce. (The function of a living thing is, intrinsically, to survive and reproduce.[4] Living things also have functions external to themselves in their habitat or biosphere, such as to provide shelter or nutrients or other goods to other living things. Here I mean function in the intrinsic sense.)

The instrumental usage intersects the biological when we consider what is good for something that is itself good for a purpose or intention. For instance, keeping a hammer clean and sheltered from the elements is good for the hammer and enables the hammer to fulfil its instrumental function. In the instrumental sense as well, the good is that which enables a thing to function well.

If someone says something is good, you can always ask “Good for whom? Good for what and under what circumstances?” If someone says something is right, you can always ask “According to what rule?” The two domains of discourse really are separate, and it is not useful to mix them. Mixing them is a form of category error. That something has good effects does not make it right. That something is in accordance with a moral rule does not make it good.

(As a caveat, let me say that the advice to pay attention to language in this way is useful for the most part, but not universally. I am proposing a heuristic rule of thumb, a tactic for getting clarity, not an infallible recipe. Sometimes the term “good” is used in a moralistic way, and there are other meanings of the term “right,” as in the right answer to a question. We have to pay attention to what is being asserted, not just to the specific words. But by and large, the language used to assess conduct provides a good clue to the nature of the assessment.)

Rightness and goodness differ in social usage. Both moral rules and consideration of consequences are ways to say “should,” that is, ways to tell someone what he or she should do (or refrain from doing) or should have done, or to tell ourselves the same. Moral rules are called “deontic,” after a Greek word meaning duty. But the deontic is not the only type of “should.” Another type, expressed in terms of goodness, is prudential or practical. In deontic cases the “should” is a prescription or even a command. In the prudential/practical case it is a recommendation. The force of our prescription or recommendation depends on the category in which the “should” is presented.

In the case of a deontic moral “should” such as “Thou shalt not steal” (“should” being stated in its strongest form, “shall”), we feel justified in demanding that people obey the moral rule and blaming them if they don’t. The imperative has a sense of universality, that it applies to everyone.

(In the case of a legal “should” we may not only demand and blame, we may also punish the offender. In the case of a “should” of social etiquette, we may only blame, but generally not demand. Neither of these is universal; they apply only within a certain legal framework or in a certain segment of society.)

An example of a prudential/practical “should” is that for good health you should eat lots of vegetables. In this case we may not demand but may certainly advise adherence to such a “should.” And we may not blame or punish failure to comply but may say the choice is foolish. Unlike moral rules, prudential/practical advice is not always universal. In practice, it depends on context. Perhaps for a malnourished vegan eating lots of vegetables would not be good, and instead he or she should try some meat.

The importance of the distinction is this: Unlike moral rules, which are not subject to objective verification, the good is a feature of the natural world; it has to do with benefits, which are publicly observable. Prudential/practical judgments are objectively verifiable. We can do studies of the effects of diet on health, for instance, studies that provide factual evidence, so the recommendation to eat vegetables is not just someone’s opinion.

Recognizing the difference between goodness and rightness shows us a way out of the quandaries and discomfort that arise from recognizing that morality is socially constructed. And recognizing the difference also shows us a way out of intractable moral conflict. Instead of framing the issues in terms of rightness, we can frame them in terms of goodness. In other words, instead of commanding one to do the right thing, we can advise one to do what is good.

Two Questions

The advice to promote goodness raises two obvious questions: Goodness for whom? And why should we do what is good anyway? A full discussion is beyond the scope of this essay, but in general the answer to the first question is, goodness for as many people as possible, including the person acting, within the bounds of what is doable. The answer to the second question is that promoting goodness in this way benefits oneself.

The underlying principle, taken from the study of systems theory applied to ecosystems, is that an element of a system thrives when the system as a whole is healthy, and a system as a whole is healthy when its constituent elements thrive. Human beings are elements in a variety of systems, most notably systems of other people, or communities. If, in situations of conflict, we can find ways to benefit all concerned, then we ourselves will be benefited. If everyone is satisfied, then the solution will be likely to last, leading to further benefit for ourselves. Short-sighted egotistical selfishness is self-defeating. The advice to seek goodness for as many concerned as possible is a strategy based on enlightened self-interest.

By the way, the injunction to work for the greater good is not utilitarian. Utilitarianism is just another morality, defining what is right in a certain way, as the greatest good for the greatest number of people. We are not obliged to maximize the good in this way. Rather, doing so is just good advice for maximizing our own welfare.

I suppose one could ask why we should maximize our own welfare. Again, a full answer is beyond the scope of this essay, but in short we have no absolute obligation to do so. In fact, however, most people do want their own welfare. The imperative is hypothetical, not categorical: If you want to enhance your welfare, work for the good of all concerned. In the absence of a rationally compelling reason to obey any given moral rule, this principle is well suited to serve as ethical guidance.

Summary and Conclusion

We started this inquiry by noting that some conflicts, those based on differing moral intuitions, resist easy solution. People entrenched in their morality have no inclination to compromise with what they see as evil. Along the way we identified a quandary felt by thoughtful people who want to be rational: we do not recognize an obligation to act on moral intuitions when we perceive them as the social constructions that they are. But which moral intuitions shall we abide by, and which shall we discard? On what basis shall we make the decision? And we feel a further discomfort when we contemplate opting out of morality but find ourselves emotionally locked in.

The way out of these issues is to recognize that there is another whole set of criteria by which to judge actions, people, policies and so forth, a set variously called “prudential” or “practical” and referred to by the language of goodness, not rightness. We can decide to focus on goodness, on what works to promote welfare, instead of on what rigid rules insist.

To apply this advice to conflicts such as those listed above, we can ask the combatants to think about what would be beneficial for both parties. This requires some tact and diplomacy, of course, but it is worth a try. If both parties receive some benefit, a lasting peace is more likely than if one party wins and the other loses.

To apply this advice to personal moral quandaries, when we are trying to figure out what to do we can ask what good can come out of each choice, not what the right choice is.

To apply this to an approach to our conduct in general, to our character as persons, we can focus on what is beneficial as a general rule. We might want to be honest, for instance, not because of a commandment to avoid bearing false witness, but because doing so promotes harmony and good relations with others, which in turn benefits us.

Morality is certainly useful for maintaining social cohesion. Universality has its appeal, but to get a cross-cultural or universal set of moral values we would have to design it. We could more readily do so on the basis of what is good for people than on sectarian moral codes.

This essay began by listing some of the ill effects of moral conflict. Focusing on benefits for all concerned instead of on rigid morality ameliorates them. Working for the common good promotes flexibility, understanding, trust and honest communication. The first step is to frame issues in terms of goodness, not rightness. The second step is to seek the good for all concerned, not because it is our duty, but because doing so will benefit each of us in the long run.


[1] de Waal, Primates and Philosophers, p. 4.

[2] Berger and Luckmann, The Social Construction of Reality, pp. 76, 77.

[3] Edel, “Right and Good.”

[4] Foot, Natural Goodness, pp. 31-32.


Berger, Peter L. and Thomas Luckmann. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. New York: Penguin Putnam Inc., 1966.

de Waal, Frans. Primates and Philosophers: How Morality Evolved. Princeton: Princeton University Press, 2006.

Edel, Abraham. “Right and Good.” Dictionary of the History of Ideas. Ed. Philip P. Wiener. 1974 edition, Vol. IV, pp. 173-187. Online publication as of 15 August 2017.

Foot, Philippa. Natural Goodness. Oxford: Oxford University Press, 2001.

Oct 12 18

Reassessing Morality

by Bill Meacham

(This is the first of a two-part series. The second part will come shortly, so stay tuned.)

Part I: The Ontology of Morality

One of the most intractable sources of conflict in human affairs is clashes of morality. No doubt there are plenty of other sources of conflict, such as resource scarcity, tribal animosity, sexual jealousy, emotional restimulation and more. But a great deal of conflict is based on differing moral intuitions. Here are a few examples:

  • A Taliban tribesman kills his daughter for taking an unsupervised walk with a young man. He thinks he was obliged to do so. We in the West consider this an appalling murder.
  • Some people want to ban all abortions, claiming that abortion is morally wrong because it is murder. Others claim that not only is abortion not murder but a woman’s right to determine the fate of her own body outweighs any other moral claim.
  • Political protesters think it is their moral duty to disobey laws they find unjust. Their opponents think patriotism, loyalty to one’s nation and obedience to its laws, is supremely obligatory.
  • Animal rights advocates praise “no-kill” animal shelters that minimize euthanasia of unclaimed pets even as costs mount drastically. They think we have a moral obligation to avoid harm to animals. Others lament the diversion of resources that could—and, they say, should—be used to provide health and public safety services to human beings.

All these examples of moral conflict (and there are many more) show certain common features. Researcher Michelle Maiese lists five: misunderstanding, mistrust, strained and hostile communication, negative stereotyping, and non-negotiability.[i] Philosopher Joel Marks describes the defects of our typical sense of morality: it makes us angry; it promotes hypocrisy; it encourages arrogance; it is imprudent, leading us to do things that have obviously bad consequences; and it makes us intransigent, fueling endless strife.[ii]

Of these features, the worst is intransigence or non-negotiability, the refusal to entertain the possibility of coming to some reconciliation, compromise or agreement. Conflicts based on differing moral intuitions are notoriously difficult to resolve.

Why is this so? To find out, we need to take a close look at what morality is and what moral judgments are about. In this essay I discuss the ontology of morality; that is, how its manner of being is like and unlike that of other kinds of things we experience. I note a sort of impasse one can find oneself in once the ontological status of morality is recognized. Then I suggest a way out of the impasse: to think in terms of goodness rather than rightness.

According to psychologist Steven Pinker, the moral judgment has specific cognitive, behavioral and emotional characteristics. Cognitively, the rules it evokes are taken to apply without exception. Prohibitions against rape and murder are believed to be universal and objective, not matters of local custom; and people who violate the rules are deemed to deserve condemnation. Behaviorally, we do in fact condemn moral offenders and praise those who obey the moral law in ways that do not apply to, for instance, people who merely wear unstylish clothes. Emotionally, when our sense of morality is triggered, we feel a glow of righteousness when we abide by the rules, guilt when we don’t, a sense of anger or resentment at those who violate the rules and a desire to recruit others to allegiance to them.[iii] (This account of moral judgment, by the way, is just a description. It does not itself make any moral claims.)

What is philosophically interesting is the nature of the moral rules. What sorts of things are they, and how do we know them? These are questions of ontology, the study of what exists, and epistemology, the study of how we come to know things. The two questions are closely related, of course, as the way we know things determines what we believe about what they are. My epistemological approach is loosely phenomenological in the Continental sense. In what follows I examine everyday experience of various kinds of entities without prejudging the status of their existence in order to find out how they appear to us. Metaphorically, at the risk of attributing agency where there is none, I investigate how they make themselves known to us. From the results of that inquiry I make judgments about their ontology. I follow Hans Jonas in thinking of ontology as the “manner of being” characteristic of various kinds of entities.[iv]

Most people, I suspect, especially those who intransigently insist that their morality is the right one, are moral realists. Moral realism is the doctrine that there are moral facts, expressible in propositions like “Murder is wrong,” that exist whether or not anyone believes they do. They are taken to be objective and independent of our perception of them and of our beliefs, feelings and attitudes towards them. In this view, if someone asks “Is murder wrong?” there is a correct answer because there really is, out there in the world, a fact of the matter.

But is there? The opposing view, with the somewhat unintuitive name “moral anti-realism,” says there is not. To see why someone might suspect that there are actually no moral facts out there in the world, we can contrast the manners of being of three different kinds of things, physical entities, mathematical/logical entities and moral entities.

We take physical entities to exist independently of us because of how they appear to us and how they behave when we interact with them. (I speak here of physical things of middling size in the everyday world, not the very tiny things of the quantum scale, nor those that are astronomically large.) Things in our ordinary experience appear in perspective. We see one side of an object, a tree, say, but not the other side. We fully expect that if we walk around the tree we will see its other side, and in fact when we walk around it, we do see its other side. If we try to occupy the same space as the tree by walking through it, we find that we can’t. A physical object occupies space and has a certain mass. If moving, it has a certain velocity (with respect to our frame of reference) and perhaps a certain acceleration. Physical objects appear in color, or at least in shades of dark and light. They persist. If we turn our back to the tree or close our eyes, we fully expect to be able to see it if we turn around or open our eyes, and our expectations are fulfilled. Physical objects change over time, and we can predict the changes well enough to take advantage of them, knowing, for instance, the best time to pick fruit from the tree. Physical objects are knowable by more than one person. We can measure the tree’s height and the circumference of its trunk, and anyone else using the same instruments will come up with the same measurements. For all these reasons it makes abundant sense to believe that physical objects exist in their own right, independently of us.

Mathematical/logical entities seem to exist independently of us as well, although they do so differently from physical objects. In contrast to physical objects, they have no perspective, no front and back. They have no mass, do not occupy space and have no velocity, acceleration or color. Unlike physical objects, which change over time, mathematical/logical objects do not. The number three is now, was always and always will be a prime number. But, like physical objects, mathematical/logical objects persist. Whenever we think of them they appear to us just as they did before, somewhat as a tree does when we open our eyes after closing them. And there are established procedures for investigating them, just as there are for physical objects. If someone proves a mathematical theorem, anyone with the requisite knowledge can verify that the proof is correct.

There is quite a philosophical controversy over the exact ontological status of mathematical and logical entities. Do they exist independently of us, or do they depend on us for their existence? Do we discover them, or do we in some sense construct them? I am very much simplifying the debate between Platonism and Nominalism here; the arguments can get very technical and arcane. But it is evident that some things certainly seem like facts: that two plus two equals four, that true premises of a valid argument yield a true conclusion, that an equilateral triangle is also equiangular, and so forth. The reality of these things does not depend on whether we believe in them or not, nor on how we feel about them. If we somehow construct them, we do so within very rigid logical constraints; there is only one possible way for each of them to be. And where does that logical constraint come from? Do we construct it? I find it more reasonable to believe that, like physical objects, mathematical/logical objects exist independently of us.

Moral entities such as the wrongness of murder or the obligation to tell the truth are different. They are neither physical nor mathematical/logical, but have characteristics of both. Like mathematical/logical entities and unlike physical objects, they lack perspective, mass, extension in space, velocity, acceleration and color. Like both mathematical and physical objects, they persist in time. If someone thinks murder is wrong today, he or she will most likely think it wrong tomorrow. Like physical objects, moral entities seem to change over time. Slavery was common and accepted in ancient Greece and Rome; today we find it morally wrong. But does that mean that the moral status of slavery has actually changed over the years, or was it always wrong and it has taken us some time to recognize its wrongness?

The fact that we can ask this question should alert us that there is something a bit strange about moral entities. Physical objects change over time in accordance with well-known natural laws. Mathematical/logical entities don’t. But we don’t have an easy and obvious answer as to whether moral entities do or don’t. Not only that, we don’t have an agreed-upon way to find out. We use the scientific method of experimentation to learn about the physical world. We use formal methods to prove mathematical and logical theorems. In both cases, any competent practitioner can use the method to find the result, a result that is objective in that it is agreed upon by all those who use the method. Objective results can be evaluated in the same way independently of who the evaluator is. In contrast, there is no accepted procedure that enables us to settle moral debate. There is no experiment to determine, for example, whether abortion is or is not morally acceptable. This leads one to suspect that moral entities do not exist objectively and independently of us as physical objects do.

There are other reasons to question the independent existence of moral entities. The late J.L. Mackie calls one of them the argument from relativity. It is an obvious fact that moral codes vary among societies and even among various classes and groups within a single society, as illustrated by the examples given above. Mackie takes these differences as evidence that different moral codes reflect different ways of life, not different apprehensions, “most of them seriously inadequate and badly distorted,” of an objective realm of moral entities.[v]

Mackie also offers the argument from queerness (by which term he means being odd or unusual, not sexual orientation). The argument from queerness, Mackie says,

has two parts, one metaphysical, the other epistemological. If there were objective values, then they would be entities or qualities or relations of a very strange sort, utterly different from anything else in the universe. Correspondingly, if we were aware of them, it would have to be by some special faculty of moral perception or intuition, utterly different from our ordinary ways of knowing everything else.[vi]

Ontologically, moral entities as we experience them do in fact seem to be different from physical and mathematical/logical entities. In addition to the points made above, there is another way they differ: they intrinsically motivate us to act. This assertion, technically known as “motivational internalism,” is not uncontroversial. Internalists believe that there is a logically necessary connection between one’s conviction that something ought to be done and one’s motivation to do it. Externalists deny this assertion and say that an independent desire, such as the desire to do the right thing, is required to motivate us. Rather than argue about concepts, I just want to point out that, empirically, moral judgments do in fact motivate the vast majority of us most of the time.[vii] We find a wallet with money in it and some papers identifying its owner. We know that morally we ought to return the money to the owner and feel some inclination to do so. Even if we keep the money, we feel the obligation, the impulse to do the right thing, and have to make some effort to overcome it.[viii]

In contrast, physical objects and mathematical/logical entities do not motivate us. A tree may be ripe with apples, but we are motivated to pick them not because they are there but because we feel hungry or think it would be nice to make an apple pie or in order to sell them or for some other reason that is intrinsic to us, not to the apples. We may enjoy the beauty of an elegant logical proof, but it does not motivate us to do anything about it unless we have, for instance, some curiosity about its further implications. The curiosity is ours, not the proof’s.

So moral entities do indeed seem to be queer in Mackie’s sense. They are not real in the familiar way that physical objects are, nor in the way that mathematical/logical entities are. They have some characteristics of both and one characteristic, that they inherently motivate us, shared by neither. If moral realism means to be real in the manner of physical objects or of mathematical/logical entities, then moral realism is false and moral anti-realism, true.

But that’s not the whole story. There is another way to be real.

As a way of approaching this other way to be real, consider the epistemological aspect of Mackie’s argument from queerness. He says that to apprehend moral entities that exist independently of us, we would need some special faculty of moral perception or intuition; and he thinks we have no such faculty. But actually, we do.

Philosophers have long debated the rational basis for moral judgments, but in fact most of our moral judgments are not made rationally. They are not carefully thought out; instead, they are the result of intuition. Jonathan Haidt and other researchers in social psychology have found that we humans are equipped, presumably from evolutionary adaptation to living in groups, with instincts for morals, a moral sense that is built into all of us except, perhaps, psychopaths.[ix] Most moral judgments are not the result of conscious deliberation. Instead, they are snap judgments made instantly and automatically. People rely on gut reactions to tell right from wrong and then employ reason afterwards to justify their intuitions. Intuitions, says Haidt, are “the judgments, solutions, and ideas that pop into consciousness without our being aware of the mental processes that led to them.” Moral intuitions are a subset: “Feelings of approval or disapproval pop into awareness as we see or hear about something someone did, or as we consider choices for ourselves.”[x] Feelings of approval and disapproval are cloaked in emotions such as delight, esteem and admiration or anger, contempt and disgust, and each of these motivates us to actions such as praise or blame. The moral sense is analogous to our capacity for language. All humans are able to learn and use language, but different cultures have different languages. Similarly, all humans have a sense of morality that manifests itself in moral intuitions. The details of what is morally approved and disapproved, however, vary from culture to culture, and that is where we find moral conflicts.

Let’s look carefully at an example of making such an intuitive moral judgment. Suppose you came across a person beating a dog. You would, if you are like many people in relatively affluent and polite Western societies, feel revulsion and disapproval. You would feel some impulse to try to get the person to stop; you would feel justified in telling the person to stop, perhaps even obligated to do so; and if asked about it, you would say that beating the dog is wrong. If asked about it further, you would cite a rule to the effect that inflicting needless harm on sentient creatures is morally forbidden.

There is a certain structure to this scenario, a way of describing it that Aristotle would call an explanation in terms of form. The structure is this:

  • There is an action going on out in the public world, an action that anyone can see: the person beating the dog.
  • You have your reaction of moral disgust, with its cognitive, affective and behavioral components. Cognitively you ascribe wrongness to the action. In your view, beating a dog counts as something wrong, something one should not do.
  • Your ascription of wrongness is an instance of a more generalized rule or system of rules to which you can refer in cooler moments, such as “Harming sentient beings needlessly is wrong,” a rule that is shared among others of your society and social class. (But it might not be shared among people of a different society or social class.)

More succinctly, beating a dog counts as wrong in the context of a generally accepted rule constituting it as wrong. Abstracting from the particulars, we can describe the structure of this scenario as “X counts as Y in context C.” Here X stands for the beating of the dog, Y stands for being wrong, and C stands for the general rule, accepted by members of your social class, to avoid needless harm.

That structure, “X counts as Y in context C,” is exactly the structure that philosopher John Searle identifies as the structure of institutional facts, facts that exist only by virtue of collective agreement or acceptance.[xi] Institutional facts are socially constructed, and there are quite a number of them. Searle mentions money, property, marriages, governments, tools, restaurants, schools and many others. They exist only because we believe them to exist, and Searle’s aim is to account for their ontology. To exist only because we believe in them sounds paradoxical. Are they like Tinker Bell? If we quit believing, would they stop existing? If so, why do we believe in them in the first place? But actually, their ontology can be rationally accounted for.

An institutional fact can be described in physical terms, but to describe only the physical aspect misses its essence. Take, for example, money. We take bits of paper with certain markings on them to be media of exchange and stores of value. Historically people have taken many different kinds of things to be money: shells, beads, coins, pieces of paper, bits of data in computer systems. But these things are not money by virtue of their physical properties. Their physical properties alone do not enable them to be used as money, even in the case of precious metals. They are money only because human beings use them as money, accept their use as money and have rules that govern their use as money.[xii] The rules actually constitute money. They do not regulate some preexisting use of bits of material; the use of certain bits of material as money is possible only in the context of the rules. The rules governing money are more like the rules of chess than rules regulating which side of the road to drive on. They create the very possibility of using money to buy and sell things.[xiii]

Searle goes into a great deal of detail about the logical structure of socially constructed facts (logical because language is an essential element in their construction and logic is a feature of language), which need not concern us here. I want only to point out the similarities between his account of such facts and morality.

  • Socially constructed facts are not physical. The markings on a US five-dollar bill are physical, but the fact that it is money is not. Similarly, moral rules are not physical.
  • Socially constructed facts are not mathematical or logical. It is not logically necessary that the piece of paper with five-dollar markings on it be money. It could without contradiction fail to be regarded as money. Similarly, moral rules are not mathematical/logical entities.
  • Socially constructed facts persist in time. The five-dollar bill has been used as money for some time, and we expect its use to continue. Similarly, moral entities persist in time.
  • Socially constructed facts can change over time and space. An 11th-century Chinese bank note is not money today, although we can recognize that it used to be money. A US five-dollar bill is not legal tender in most other countries today, even if it is known to be money in the US. Similarly, moral rules change over time and vary from place to place.
  • Socially constructed facts have normative implications. Searle notes that social institutions such as marriages, property and money entail institutional forms of powers, rights, obligations and duties. These are things that give us reason to act that are independent of whether we are inclined to do so or not.[xiv] Similarly, moral entities do in fact motivate us to action regardless of our inclination otherwise.
  • Socially constructed facts have functions that the underlying physical facts do not. These functions are part of the definition of the social facts. The status of bits of paper as money implies their function as media of exchange. That’s what it means to be money.[xv] Similarly, moral norms have functions. Among members of a society they promote and regulate social cooperation. Within each person they promote order among potentially conflicting motivations, thereby encouraging that person to be a constructive participant in the cooperative life.[xvi]
  • Socially constructed facts have the structure “X counts as Y in context C.” So do moral evaluations of particular actions and of types of action.

Based on these considerations it seems reasonable to say that the manner of being of moral entities is to be socially constructed. They exist independently of any particular person, but they are not independent of conscious agents altogether as physical and (arguably) mathematical/logical entities are. Moral entities are socially constructed within a community of practice, a social group, a culture or a society. Within such a community or society, everybody agrees (more or less) on what they are, everybody treats them the same way and everybody acts as if they are real. Just as there are consequences for the way we deal with physical objects, there are real consequences for the way we abide by moral rules or not, namely the reactions of others in the community. So, for members of such a community they are real. The ontological status of morality is that it is a socially constructed reality.

Is this conclusion morally realist or anti-realist? As with many conceptual issues, it depends on definitions of terms. If “realism” means to be real as physical entities are, then it is anti-realist. If “realism” means to be real in any fashion at all, then it is realist. More important is what it tells us about the source of moral conflict. Moral systems vary among societies, but each society takes its morality to apply to all people universally. Hence, nobody wants to compromise. What our conclusion does not tell us is what to do about such conflict. For that, we need some more consideration.

(To be continued.)


[i] Maiese, “Moral or Value Conflicts.”

[ii] Marks, Ethics without Morals, pp. 40-48.

[iii] Pinker, “The Moral Instinct.”

[iv] Jonas, Mortality and Morality, p. 88.

[v] Mackie, Ethics, p. 37.

[vi] Ibid., p. 38.

[vii] For an account of why this is so based on empirical research see Prinz, “The Emotional Basis of Moral Judgments.”

[viii] This example is specific to a certain culture and a socioeconomic class within that culture, but similar examples obtain mutatis mutandis in other cultures and classes.

[ix] Haidt, The Righteous Mind, pp. 123–127 and pp. 170–176. Haidt and Joseph, “The Moral Mind.” Haidt and Joseph, “Intuitive Ethics.” Pinker, “The Moral Instinct.”

[x] Haidt and Joseph, “Intuitive Ethics,” p. 56.

[xi] Searle, The Construction of Social Reality, pp. 2, 28, 43-45.

[xii] Ibid., pp. 41-45.

[xiii] Ibid., pp. 27-28.

[xiv] Ibid., p. 70.

[xv] Ibid., p. 114.

[xvi] Wong, “Making An Effort To Understand,” p. 13.


Haidt, Jonathan. The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon Books, 2012.

Haidt, Jonathan, and Craig Joseph. “Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues.” Daedalus, Fall 2004, Vol. 133, No. 4, pp. 55–66. Online publication as of 12 September 2017.

Haidt, Jonathan, and Craig Joseph. “The Moral Mind: How Five Sets of Innate Intuitions Guide the Development of Many Culture-Specific Virtues, and Perhaps Even Modules.” Carruthers, Peter, et al., Eds. The Innate Mind, Volume 3, pp. 367-391. New York: Oxford University Press, 2007. Online publication as of 12 September 2017.

Jonas, Hans. Mortality and Morality: A Search for the Good after Auschwitz. Ed. Lawrence Vogel. Evanston, Illinois: Northwestern University Press, 1996.

Mackie, J.L. Ethics: Inventing Right and Wrong. London and New York: Penguin Books, 1977.

Maiese, Michelle. “Moral or Value Conflicts.” Beyond Intractability. Ed. Guy Burgess and Heidi Burgess. Conflict Research Consortium, University of Colorado, Boulder, Colorado, USA. Online publication as of 6 July 2017.

Marks, Joel. Ethics without Morals: In Defense of Amorality. New York and London: Routledge, 2013.

Pinker, Steven. “The Moral Instinct.” New York Times, January 13, 2008. Online publication as of 13 January 2008.

Prinz, Jesse. “The Emotional Basis of Moral Judgments.” Philosophical Explorations, Vol. 9, No. 1, March 2006, pp. 29-43. Online publication as of 12 August 2017.

Searle, John R. The Construction of Social Reality. New York: The Free Press, 1995.

Wong, David. “Making An Effort To Understand.” Philosophy Now, Issue 82 (January/February 2011), pp. 10-13. London: Anya Publications, 2011. Online publication as of 12 April 2012.

Sep 13 18

Moral Hallowing Reevaluated

by Bill Meacham

After getting some feedback on my essay last time on Richard Beck’s notion of moral hallowing, I realize that I was a bit too harsh on him. A reader comments,

Just taking an intellectual position [of moral anti-realism] does not cause my underlying, social ape moralizing and politicking to stop, nor could it, in a real human with human psychology. I am not going to stop behaving as though or acting on my underlying, subjective belief that murder is wrong ….(1)

Right. All of us except for psychopaths have a sense of morality that we cannot simply reason away. The details of what conduct is prohibited, allowed and required by the moral code vary from culture to culture, but all cultures have one. Every culture has sets of rules, whether stated explicitly or not, that specify how people are to act. And people in every culture—which is to say all people, as we never find humans in isolation—have internalized the moral code of their culture and have a conscience, a sense of right and wrong. Most of our moral judgments are not made rationally. They are not carefully thought out; instead, they come as intuitions, which some call the voice of conscience.

By “intuitions” I mean rapid and automatic judgments. Psychologists Jonathan Haidt and Craig Joseph say that intuitions are “the judgments, solutions, and ideas that pop into consciousness without our being aware of the mental processes that led to them.” Moral intuitions are a subset: “Feelings of approval or disapproval pop into awareness as we see or hear about something someone did, or as we consider choices for ourselves.” Human beings “come equipped with an intuitive ethics, an innate preparedness to feel flashes of approval or disapproval toward certain patterns of events involving other human beings.”(2) Haidt explains that most people have more than one category of moral intuition: an urge to care for people and prevent harm, for instance, a concern for fairness, a respect for authority but also a revulsion toward those who dominate others, and more; and their relative degrees of influence vary from person to person.(3)

Beck says that “both Christians and atheists ground their ethics in metaphysics, in presupposed ‘oughts,’ basic norms taken as givens.”(4) In my essay last time I took him to mean that when people really think about it, they find that they can articulate what their basic norm is. I objected that after careful thought some people intellectually adopt moral anti-realism and recognize no basic norm. What I overlooked is that even such people can’t help feeling moral emotions and making intuitive moral judgments.

Consider Peter Singer, a moralist best known for his role as an intellectual founder of the animal rights movement. He is a stringent utilitarian, arguing that “we ought to be preventing as much suffering as we can”(5) and that physical proximity makes no difference in how much we are obligated to help someone. A needy child in East Bengal counts morally as much as one right next door.(6) But he has spent tens of thousands of dollars a year on care for one person, his mother,(7) money that could instead have fed several hundred children in Africa.(8) My point is not to blame Singer for his choice. I just want to point out that when it comes to morality, our intuitions, such as the urge to help one’s mother, often have more influence on our decisions than our intellectual positions.

This is a more charitable way to understand Beck’s claim that everyone grounds their ethics in metaphysics. Regardless of our intellectual position, we all have moral instincts, and we act on them. Beck’s talk of metaphysics makes the process sound more cerebral than it is. Many of us don’t think through the implications of our norms enough even to question whether there is one that grounds them all. But if we do take a moment to reflect, we find that some things are indeed of overriding importance to us, in practice if not in theory.

So in that sense, Beck is right. Now the question is “So what?” What shall we do about the norms we rely on?

The first thing to note is that moral norms are not the only ones that influence our behavior. The norms we follow are not just our moral instincts, but our baser tendencies as well. The great majority of us fall prey at times to pride, envy, gluttony, lust, anger, greed and sloth, not to mention simple selfishness and discourtesy to others. Or we approach life in ways that are self-defeating, leading to dissatisfaction and unhappiness. Or both. In Christian terms, we sin. In secular terms, we succumb to akrasia, the vice of weakness of will. Lacking self-control, we act against our better judgment.

There is no shortage of advice as to what to do about such unfortunate circumstances. Christians advise us to repent and get right with God. Buddhists advise us to cultivate mindfulness and compassion. Stoics advise us to quit worrying about things we have no control over and make good choices about the ones we do. In my book I explain a number of ways we can take advantage of our uniquely human ability to think about our own thinking and avoid emotional rigidities that impair our ability to make good choices.

But what about the moral norms themselves, whether or not we live up to them? Once we get some clarity about the fact that we have such norms and what they are, we get to question them. What is their basis? Why should we follow them? We feel the obligation to do good, be fair, and so forth, but why should we? These are meta-ethical questions having to do with the ontology of morals. And that is a topic for another time.


(1) Lucas.

(2) Haidt and Joseph, “Intuitive Ethics,” p. 56.

(3) Haidt, The Righteous Mind, pp. 123–127 and 170–176.

(4) Beck, 29 August 2018.

(5) Singer, “Famine, Affluence, and Morality,” p. 238.

(6) Ibid., pp. 231-232.

(7) Specter, “The Dangerous Philosopher,” p. 55.

(8) Unite For Sight, “Fighting Hunger.”


[Beck, 29 August 2018] Beck, Richard. “Yet More On Moral Hallowing.” Online publication as of 30 August 2018.

Haidt, Jonathan. The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon Books, 2012.

Haidt, Jonathan, and Craig Joseph. “Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues.” Online publication available upon request from the author as of 9 September 2018.

Lucas, Richard, comment on Meacham, “Moral Hallowing.” Original comment posted on Google Plus, reposted at

Meacham, Bill. How To Be An Excellent Human: Mysticism, Evolutionary Psychology and the Good Life. Austin, Texas: Earth Harmony, 2013. Available at

Singer, Peter. “Famine, Affluence, and Morality.” Philosophy & Public Affairs, Vol. 1, No. 3 (Spring, 1972), pp. 229-243. Online publication as of 12 April 2017.

Specter, Michael. “The Dangerous Philosopher.” The New Yorker, 6 September 1999, pp. 46-55. Online publication as of 8 September 2018.

Unite For Sight. “Fighting Hunger.” Online publication as of 10 September 2018.

Sep 4 18

Moral Hallowing

by Bill Meacham

A reader of this blog has asked me what I think of the assertion by Richard Beck, a psychology professor at Abilene Christian University, that “everyone engages in moral hallowing.”(1) In short, I’m not entirely impressed.

The term “hallowing” may not be familiar to many of us, as it seems to be a distinctly Christian notion and is now a bit archaic. The word “hallow” is related to “holy”, and to hallow something means to regard it as holy. Beck defines hallowing as “setting something apart, recognizing [it] as holy and sacred.”(2) Moral hallowing, then, is regarding moral judgments as being set apart in some way from non-sacred judgments such as the findings of the physical sciences and our everyday observations of what goes on around us.

In a series of three blog posts, Beck examines the notion of moral hallowing. In the first of them, he makes a rather strong assertion:

Everyone, even atheists, engage[s] in moral hallowing. Everyone evaluates human moral actions sub specie aeternitatis [from the perspective of the eternal]. Everyone cares about God’s judgment.(3)

On the face of it, this claim is obviously false. Quite a number of people care not a whit about God’s judgment, believing that there is no such thing. Apparently several of his readers made the same objection, because in subsequent posts he rephrases his position to something more secular. (I say “apparently” because he refers to comments that are not visible on his website due to a technical glitch.)

His rephrased position is that morality requires metaphysical grounding, meaning that morality must refer to axioms or first principles or basic norms that are not grounded in anything else. Citing Hume, he notes that you can’t generate a moral “ought” from a purely factual “is”. He says, “The biggest axioms in moral hallowing are concepts like ‘good,’ ‘evil,’ ‘should,’ ‘wrong,’ [and] ‘ought.'” According to Beck, judgments using such concepts are not based on physical facts and come loaded with an expectation of compliance, that they are binding on our conduct, in a way that purely factual statements do not.(4)

So far, so good. It is true that as a matter of purely descriptive psychology, moral judgments differ from factual judgments. As Steven Pinker notes, moral judgments have specific cognitive, behavioral and emotional characteristics. Cognitively, the rules they evoke are taken to apply without exception. Prohibitions against rape and murder are believed to be universal and objective, not matters of local custom; and people who violate the rules are deemed to deserve punishment. Behaviorally, we do in fact punish moral offenders and praise those who obey the law in ways that do not apply to, for instance, people who merely wear unstylish clothes. Emotionally, when our sense of morality is triggered, we feel a glow of righteousness when we abide by the rules, guilt when we don’t, a sense of anger or resentment at those who violate the rules and a desire to recruit others to allegiance to them.(5)

But to note these facts about morality is not to ground moral judgments metaphysically, nor is it to hallow them. Beck posits that a basic norm underlies all moral judgments. He says “The basic norm is an authorizing norm that is not authorized by any other (higher) norm. Consequently, the normative authority of the basic norm (i.e., why we must obey it) has to be presupposed as valid and authoritative. That’s the only way to stop an infinite regress.”(6) He thinks that everyone who makes moral judgments, if he or she thinks about it enough, will end up with a basic norm, although presumably religious believers will have a different one from secularists. But that’s not necessarily the case.

There is a respectable position in moral philosophy called “moral anti-realism”. It says that, contrary to popular opinion, there are no moral facts, expressible in propositions like “Murder is wrong,” that exist independently of our belief that they do. Physical objects like rocks, trees and mugs of beer exist in their own right whether or not we think they do; but according to moral anti-realism, moral facts don’t. There are various flavors of this position. One, noncognitivism (Beck calls it “emotivism”), holds that moral statements express only emotions, not facts. To say “Murder is wrong” is just to say “Boo, murder!” Another, moral error theory, says that moral statements purport to express facts, but all such statements are false because there are no moral facts. Another, nonobjectivism, says that moral facts do exist, but not independently of human minds; they are something that we in some way make up.(7) There are numerous permutations and elaborations of these positions, but my point is that someone who holds an anti-realist position might very well not hold any basic norm that authorizes or grounds specific moral judgments.

Beck’s statement that everyone engages in moral hallowing is both misleading and false. It is misleading because non-religious people don’t engage in setting apart anything as holy at all. They may set some things apart from ordinary life, but they would rightfully object to calling such things holy. To say to his Christian audience that atheists do what Christians do but they just call it something different is disingenuous. And Beck’s statement is false because some people, quite thoughtful ones, don’t assert a basic moral norm at all.

That said, there is no doubt some value (not moral value, just value for our own well-being) in thinking about the moral attitudes we have and what their basis could reasonably be. Knowing ourselves, as the Oracle at Delphi observed, is a great good. But it is not necessarily our moral duty.


(1) Beck, 20 August 2018.

(2) Ibid.

(3) Ibid.

(4) Beck, 27 August 2018.

(5) Pinker, “The Moral Instinct.” See my book How To Be An Excellent Human, Chapter 21, “Our Sense of Morality.”

(6) Beck, 29 August 2018.

(7) Joyce, “Moral Anti-realism.”


[Beck, 20 August 2018] Beck, Richard. “Progressives On Judgment and Hell: Part 3, Moral Hallowing.” Online publication as of 30 August 2018.

[Beck, 27 August 2018] Beck, Richard. “Once More On Moral Hallowing.” Online publication as of 30 August 2018.

[Beck, 29 August 2018] Beck, Richard. “Yet More On Moral Hallowing.” Online publication as of 30 August 2018.

Joyce, Richard. “Moral Anti-realism.” The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). Online publication as of 27 August 2015.

Meacham, Bill. How To Be An Excellent Human: Mysticism, Evolutionary Psychology and the Good Life. Austin, Texas: Earth Harmony, 2013. Available at

Pinker, Steven. “The Moral Instinct.” Online publication as of 12 January 2008.

Jul 1 18

Sartre’s Bad Logic

by Bill Meacham

Last time I asserted that being conscious of something does not always or necessarily include or entail being conscious of being conscious. French existentialist Jean-Paul Sartre thinks it does. Since the Monday evening group of the philosophy club is studying Sartre, it seems appropriate to examine his argument. It’s a bad one. Sartre falls prey to faulty logic.

In the introduction to his influential Being and Nothingness, Sartre asserts that “every positional consciousness of an object is at the same time a non-positional consciousness of itself.”(1) He says that “the necessary … condition for a knowing consciousness to be knowledge of its object is that it be consciousness of itself as being that knowledge.”(2) Let’s ignore the somewhat mysterious usage of the term “consciousness,” as if consciousness were an agent who is conscious. Let’s ignore also the fact that he seems to conflate being conscious of and being conscious that.(3) Instead, I want to focus on the logic of his argument. Here it is in its entirety:

This is a necessary condition, for if my consciousness were not consciousness of being consciousness of the table, it would then be consciousness of that table without consciousness of being so. In other words, it would be a consciousness ignorant of itself, an unconscious—which is absurd.(4)

The form is a reductio ad absurdum. From the premise that consciousness of a table is not consciousness of consciousness of the table he purports to derive a contradiction, that consciousness is unconscious. Hence, the premise is false, and consciousness of a table is consciousness of consciousness of the table. But the argument fails because Sartre begs the question.

The passage contains four phrases, the first three of which have the same meaning:

  • consciousness [that is] not consciousness of being consciousness of the table
  • consciousness of that table without consciousness of being so
  • consciousness ignorant of itself

So far, he just says the same thing in different words. But then he asserts something else:

  • an unconscious

The movement from the third phrase to the final one is not explained, nor is it authorized by reference to any rule of logic. Sartre just asserts that consciousness ignorant of itself is the same as an unconscious. But that is just what he wants to prove! His premises include his desired conclusion, so the inference proves nothing.
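One way to lay out the intended reductio schematically (my own reconstruction and abbreviations, not Sartre’s notation) is:

```latex
% Abbreviations (mine, for illustration):
%   C = there is consciousness of the table
%   S = that consciousness is conscious of itself
%   U = that consciousness is an unconscious
\begin{align*}
&\text{Assume } \neg S       && \text{(premise for reductio)}\\
&\neg S \rightarrow U        && \text{(Sartre's unargued step)}\\
&U \text{, given } C \text{, is absurd} && \text{(the contradiction)}\\
&\therefore S                && \text{(conclusion of the reductio)}
\end{align*}
```

Laid out this way, the trouble is visible in the second line: $\neg S \rightarrow U$ is equivalent, by contraposition, to $\neg U \rightarrow S$, and since being conscious of the table is precisely not being unconscious, that just is the conclusion that consciousness of the table is conscious of itself. The premise Sartre smuggles in is the thesis he set out to prove.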

Not only does Sartre fail to prove his point, it is patently false that consciousness ignorant of itself is an unconscious. Rephrasing the idea, Sartre asserts that an instance or episode of being conscious of a table that does not include being aware of being conscious of the table is thereby unconscious. But that episode of being conscious is certainly not unconscious! The contents of that episode of being conscious include the table, focally, and a great number of other things that are not in focus: the floor, the ambient temperature, perhaps objects on the table, etc. It is quite the opposite of unconscious, a state in which none of those contents would be present to one.

It is peculiar that Sartre, examining experience in the phenomenological tradition of Husserl, should rely on logic for such an important assertion about the nature of experience rather than on direct observation. But he does, and he gets some other things wrong about Husserl as well. But that is a topic for another time.


(1) Sartre, Being and Nothingness, p. liii.

(2) Idem, p. lii.

(3) The phrase “as being that knowledge” indicates what Dretske calls awareness of fact, having an idea or concept. In this case, the concept seems to be that one knows or is a knower. Again, Sartre’s language is less than perfectly clear.

(4) Sartre, Being and Nothingness, p. lii.


Dretske, Fred. “Conscious Experience.” Mind, New Series, Vol. 102, No. 406 (Apr., 1993), pp. 263-283. Online publication as of 18 December 2015.

Sartre, Jean-Paul. Being and Nothingness. Translated by Hazel E. Barnes. New York: Philosophical Library, 1956.

Jun 10 18

Being Conscious of Being Conscious

by Bill Meacham

Last time we looked at the surprising ability of plants, which seem to be agents in their own right, to seek goals and act so as to achieve them. A recent popular survey of environmental scientists and philosophers asks a similar question, whether plants are conscious.(1) Some say they obviously are because they respond to their environment, gather information and act with discernment in a way that non-living things such as rocks do not. Others insist that plants are not conscious because they have no ability to be conscious of themselves. Being conscious, in this view, requires being aware of oneself as well as of one’s surroundings.

But that assertion raises a number of issues. Is being conscious different from being aware? What is it to be conscious, or aware, in the first place? And what is this self, of which, some assert, we must be aware in order to be conscious at all?

Many people use the term “aware” to mean something different from “conscious.” For instance, professor Heidi Appel says “Are plants conscious? My view is that they are not, even though they are aware of many aspects of the environment in which they live.”(2) The problem is that in English the two terms mean roughly the same thing. “Conscious” is from a Latin root, and “aware” is from Old Saxon, but otherwise they are each defined in terms of the other.(3) Many other languages have only one term for both the English words: “bewusst” in German and “consciente” in Spanish, for instance. Others have two, but they do not translate directly to the two in English. We find “consciente” and “ciente” in Portuguese and “conscient” and “au courant” in French.

If we substitute “conscious” for “aware,” then, what Appel asserts is that plants are not conscious even though they are. That can’t be what she means. What Appel seems to be getting at, perhaps, is that when one is conscious, what one is conscious of is more intense or clear or in focus than it would be if one were merely aware of it. I say “perhaps” because what she means is not at all clear. Does she mean that the world appears to us more vividly or more in focus than it does to plants? How could we possibly know?

Sometimes “aware” connotes being informed or knowledgeable in a way that “conscious” does not. If you want to say that someone knows the rules, “She is aware of the rules” sounds better than “She is conscious of the rules.” Does Appel mean to say that plants know many aspects of their environment, but not in the same way or as much or as well as humans do? Maybe by “aware” she means only that plants respond to their environment. Again, her meaning is not clear.

What are we to make of this confusion? My preference is to use the terms interchangeably.(4) If nothing else, it makes translation into other languages easier. Many times we can dispense with the problematic terms altogether. If you want to emphasize the intensity or vividness of someone’s experience, just say that she is intensely or vividly conscious of what is before her. If you want to emphasize someone’s knowledge of something, just say she knows it.

Maybe Appel means that humans have a second-order capability that plants lack, that we are, or can be, conscious of being conscious, but plants aren’t and can’t. There is a surprising amount of controversy about whether and to what extent one must be conscious of being conscious in order to be conscious at all, and the nuances of the debate are instructive.

Does being conscious always include some element of being peripherally aware, if not fully conscious, of one’s own process or activity of being conscious? Sometimes being conscious of an object does include thinking about one’s experience in addition to focusing on and thinking about the object itself. (By “object” I do not mean to imply something existing external to the one who is conscious, as naive realism would have it. I mean only whatever appears to one. The tree in a hallucination of a tree is as much an object as it is in a perception of a tree or in a mental image of a tree.) At such times one puts some attention on the fact that one is conscious of something, as well as on the object of which one is conscious. That this type of experience is always vivid and always leaves memories leads some to believe that being conscious always includes some degree of being aware of being conscious.

One of them is contemporary philosopher Galen Strawson. In an extended essay on the subject, he tries to tease out ways of speaking that adequately express the assertion that being conscious always includes some degree of being aware of being conscious. He expresses it in various ways:

(1) All awareness involves awareness of that very awareness.(5)

Substituting synonyms, he gets the following:

(1a) All consciousness involves consciousness of that very consciousness.(6)

(1b) All experience involves experience of that very experience.(7)

“Awareness” and “consciousness” are both nouns, and so is “experience” in this context. But “experience” can also be a verb, as in “She experienced the concert with delight.” Strawson changes “experience” to the gerund form of the verb, which has the advantage of emphasizing its active, processual nature:

(1c) All experiencing involves experiencing of that very experiencing.(8)

(1d) All experiencing involves experiencing that very experiencing.(9)

He does not so much argue for his assertion as hope that at least one of his formulations will appeal to and convince his readers. Concerning the last, he asks us to “listen for the sense in which (1d) is necessarily true.”(10)

Well, I’ve listened, and I don’t hear it. I contend that not all instances of being conscious include some degree of being aware of being conscious. Only some of them do. Before I argue for that position, let’s look more carefully at what is being asserted.

There are other ways of expressing Strawson’s thesis, and he helpfully lists a few from other authors: Alvin Goldman: “In the process of thinking about x there is already an implicit awareness that one is thinking about x.” René Descartes: “When I will or fear something, I simultaneously perceive that I will or fear.” John Locke: “Thinking consists in being conscious that one thinks.” Sartre: “Consciousness is conscious of itself, that is, the fundamental mode of existence of consciousness is to be consciousness of itself.” Aron Gurwitsch: “Consciousness … is consciousness of an object on the one hand and an inner awareness of itself on the other hand. Being confronted with an object, I am at once conscious of this object and aware of my being conscious of it.”(11)

Canadian philosopher Leslie Dewart puts it this way:

[An] invariable element of experiencing an object consciously consists in experiencing, moreover, that the object is being experienced. … Careful introspection reveals that we can never be consciously aware of anything without being thereby—through the same act and at the same time—aware that we are aware of it. … In every conscious experience the act of experiencing is present to itself.(12)

In all these different ways of putting the matter no one is arguing that every instance of being conscious involves two separate things, focusing on whatever one is conscious of and in addition focusing on the act or process of being conscious of it. Strawson says that “we’re rarely in a state of awareness taking a state of awareness as an express object of reflective attention.”(13) Rather, they say that being conscious is all one thing (using the term “thing” loosely). They say that the state of being conscious of being conscious is “non-positional,” “nonreflective,” “pre-reflective,” “low-level,” “non-conceptual,” “non-observational,” or “non-thetic.”(14)

So what is this non-observational, nonreflective state? Two different things are asserted about it:

A. That being conscious always involves being aware that one is conscious.

B. That being conscious always involves being aware of being conscious.

These assertions are not identical. Being conscious or being aware that means having an idea or concept. If someone says she is aware that her car is in the driveway, it means that she knows that her car is so located, that she has an idea of where her car is. If someone says she is aware that she is conscious it means that she has at least a dim idea that she is conscious (of whatever she is conscious of).

Being conscious of means being in direct perceptual contact with that of which one is conscious. Being conscious of one’s car, for instance by seeing it, is not the same as knowing its location. Nor is being dimly aware of one’s car, for instance by seeing it out of the corner of one’s eye. I suppose it would be difficult to be conscious of something without knowing its location, but certainly one can know something’s location without directly perceiving it. The two are not the same.

Being Aware That One is Conscious

I’ll return to being aware of shortly. Let’s first examine assertion A, that being conscious always involves being aware that one is conscious. The thesis, as I said, is not that whenever one is conscious of something one is also explicitly and focally conscious of the thought of one’s being conscious. Certainly most of the time we are not. The thesis rather seems to be that in every moment of experience something is present that counts as knowledge that one is experiencing. Such knowledge is said to be non-thetic, meaning that it is not in the focus of attention. The thesis is that being conscious always contains some element—sometimes more pronounced and sometimes less so—of knowledge that one is conscious.

I think careful observation of experience will show that sometimes it does and sometimes it does not.

What we call conscious experience does indeed sometimes contain some element of knowing that one is conscious. One can be dimly aware, if not fully conscious, of the idea that one is conscious, that is, of oneself as being conscious. Such an idea may be verbal or visual or some combination or present in some other mode; the exact mode may well vary from person to person. It has the ability to lead one in some way to action or further thought about oneself or one’s experience.

But such an idea is not always present. What is always present in vivid experience that leaves memories is, in addition to the object being paid attention to, thinking that bears some relation to the object of attention. The more such thinking is present, the more vivid is one’s ordinary experience and the stronger one’s memory. The thinking may be about the object or it may be about the subjectivity of one’s experience or both. But it is not necessary that it be about one’s subjectivity. It is enough that it be about the object.

I am willing to grant that every instance of being conscious has the potential of including knowledge that one is conscious, but not that every instance does in fact do so. At best we may have tacit knowledge—knowledge that is not presently thematic but could become thematic, or attended to—that we are conscious. But that knowledge is not always present in experience, even dimly, in the form of thinking or having an idea. It is often just not there at all. Phenomenologically, we are not always in fact aware that we are aware.

Of course, whenever one thinks to “look,” one finds oneself knowing that one is conscious. If one is questioning whether being conscious entails, or always includes, being aware that one is conscious, and one examines one’s experience, then one will naturally find such thoughts and observations in the background. The trick is to examine one’s retention of what one was conscious of just before one thought to “look.” In a great many cases, one will find no such conceptual content. (Or, to be precise, in most cases, I, the author, have found no such conceptual content.)

Being Aware Of Being Conscious

Being aware that one is conscious is not a universal characteristic of being conscious. What of assertion B, that being aware of being conscious is? Again, the assertion is not that we are always explicitly and focally conscious of being conscious, for obviously most of the time we are not. Instead, our mode of being aware of being conscious is said to be “pre-reflective,” “non-reflective,” “low-level,” “non-conceptual,” “non-observational,” “non-positional” or “non-thetic.” The thesis seems to be that something that is reasonably described as being conscious of being conscious is present in one’s experience at all times at least dimly. But what is that something?

A clue is found in an activity that does entail being conscious of being conscious in an explicit and focused way: the practice of mindfulness meditation, which Strawson recommends.(15) The practice consists of “paying precise, nonjudgmental attention to the details of our experience as it arises and subsides.”(16) One sits quietly with spine erect and simply pays attention to what is happening. Typically one focuses on one’s breath, specifically on the physical, tactile sensation of the air that passes in and out of the nostrils.(17) As one does so, one notices not only the breath but also the thoughts, feelings, bodily sensations and other subjective events that occur. One does not, however, follow or cling to any of them; instead, when one notices that one’s attention has wandered, one returns focus to the breath. The effects are said to be a sense of calmness, a heightened sensitivity to one’s own subjective thoughts and reactions to events when one is not meditating, an insight into the essential nature of reality, and ultimately release from suffering.(18) Be that as it may, what one focuses on is what we can metaphorically call the contents of our experience, including those we take to be subjective. The more one pays attention, the more one finds all sorts of things that one normally overlooks: thoughts, feelings, incipient actions and even structural features such as what C.S. Peirce calls “perceptual judgments.”(19) Being conscious of being conscious in this case means bringing to explicit attention the subjective elements in one’s experience.

The question is whether there is an attenuated form of such mindful being aware in every instance or episode of being conscious. I think not. In my (the author’s) own case, I find lots of times when nothing of the sort is present. Indeed, the reason one practices such meditation is to increase the quantity and duration of moments of mindfulness. Most of the time I am not at all mindful of a great deal of the contents of my experience, and I suppose the same is true of most of us.

If that sort of mindfulness is not what Strawson and others mean by being aware non-thetically of being conscious, then, sorry, I don’t know what such being aware is. I have never observed it. I don’t find it in my experience at all. And Strawson himself agrees that it is not obvious to everyone: “It can seem natural to say that we’re often not aware of our awareness—not only when we’re watching an exciting movie but also in most of daily life.”(20)

Whence the Confusion?

If it is not all that obvious, one wonders how the idea arose that being conscious always involves being aware that one is conscious or being aware of being conscious. Perhaps there are elements in experience that are constant enough that one might take them to be evidence for these assertions.

One such element is the self-sense, the background sense of oneself that is present all the time, whether one pays attention to it or not. The self-sense is what gives one a feeling of continuity, extending far into one’s past; it is what lets one know, without thinking about it, when one gets up in the morning that one is the same person who went to sleep last night. It is the confluence of one’s bodily feelings, one’s moods and emotions, beliefs, evaluations of oneself, and the feelings concomitant with one’s actions. It is present continuously, though most often unnoticed, in all one’s experience, reflective and unreflective. My (the author’s) investigations lead me to believe that such a self-sense is continually present; at least it is there whenever I “look.” That it is a subjective sense, not available to observation by others, might lead some people to use the term “consciousness” to denote it, taking that term to mean subjectivity. But this self-sense is something of which one is or can be aware. It does not count as something that is reasonably described as the activity or process of being conscious. (We can say that the self is conscious, but not that the self-sense is.)

Another element is self-consciousness in the ordinary sense, a state in which one knows or senses that one is being observed or might be observed by others. Self-consciousness often includes feelings of embarrassment or fear of being judged, but sometimes it can include feelings of confidence or enjoyment of the attention of others. Since we humans are highly social animals, some might postulate that in each of us a minimal form of self-consciousness is present all the time, even in the absence of other people. And the feelings of embarrassment or pride are subjective, so a sense of “consciousness” as roughly meaning subjectivity might be thought appropriate to denote it. But again, thinking that one is or might be observed is not the same as being conscious of being conscious. And besides, people are generally not self-conscious in this sense all the time, even minimally.

Perhaps there are other elements in experience that might lead someone to think that “being conscious of being conscious” is appropriate to describe them. If so, I suspect that such descriptions will be amenable to clarification through using more precise language and that they too will prove untenable.

Problem: How to Adjudicate

I assert that it is not the case that every occasion of being conscious includes some pre-reflective or pre-conceptual thought that one is conscious, nor does it include some element of being aware of being conscious. This assertion is controversial, as it disagrees with a number of very prominent phenomenologists. My disagreement with them reveals a systematic weakness in the first-person point of view, whether that be phenomenological in the Husserlian sense or merely introspective: there is no way to tell who is right! Strawson and others say introspection always reveals something that I say it doesn’t. How can we decide which one of us is correct?

We can do several things. We can “look” again and describe as clearly as we can what we find. We can ask others to examine their experience and tell us what they find. Both of these efforts will be helped by using language in a standard way, as I advise.(21) In addition, we can describe as clearly as we can the process by which we have examined our experience. Such processes include, for instance, examination of experience while the examined experience is going on, of retained experience immediately afterwards, of remembered experience some time after, of imagined experience in the manner of Husserl’s eidetic variation, etc. And we can ask others to describe their process in order to see whether different processes lead to different results. But ultimately it is up to each one of us to find what we find and remain true to it.

So What?

If you have read this far, you might think that the whole issue is rather arcane, even too arcane to care about. Whether or not our experience is always “present to itself” as Dewart says seems to have little relevance to our everyday life. And, in a way, you are right. It doesn’t really matter whether or not you end up believing that every occasion of being conscious somehow includes or involves or entails being conscious of being conscious. But what does matter is how you arrive at your conclusion. If you just take someone’s word for it, you have basically wasted your time. But if you investigate for yourself by examining closely your own experience, you will learn some things. You will learn how your mind works: the ways it influences how you see the world and how it affects your ability to operate in the world. Conceptually, if you are interested, you can find some answers to questions such as what being conscious and what the self which is conscious actually are. Practically, you will have a chance to become more effective in your chosen pursuits and even, perhaps, gain insight into what pursuits are worth choosing. You will learn to know yourself, as the Oracle at Delphi advised, and gain the benefits of an examined life.


(1) Kolitz, “Are Plants Conscious?”

(2) Ibid.

(3) “Conscious” and “Aware.”

(4) Meacham, “How to Talk About Subjectivity.”

(5) Strawson, “Self-Intimation,” p. 139.

(6) Ibid., p. 143.

(7) Ibid.

(8) Ibid.

(9) Ibid.

(10) Ibid.

(11) Ibid., pp. 148, 149, 150, 152.

(12) Dewart, Evolution and Consciousness, pp. 38-39, emphasis in original.

(13) Strawson, p. 142.

(14) Gallagher and Zahavi, “Phenomenological Approaches to Self-Consciousness,” section 1.

(15) Strawson, p. 154, fn. 51.

(16) Wegela, “How to Practice Mindfulness Meditation.”

(17) Thanissaro Bhikkhu. “One Tool Among Many: The Place of Vipassana in Buddhist Practice.”

(18) Ibid.

(19) Peirce, Collected Papers, Vol. V, ed. Charles Hartshorne and Paul Weiss, pp. 38, 114-115.

(20) Strawson, p. 142.

(21) Meacham, “How to Talk About Subjectivity.”


Bhikkhu, Thanissaro. “One Tool Among Many: The Place of Vipassana in Buddhist Practice.” Online publication as of 29 February 2015.

“Aware.” Online publication as of 4 May 2016.

“Conscious.” Online publication as of 4 May 2016.

Dewart, Leslie. Evolution and Consciousness: The Role of Speech in the Origin and Development of Human Nature. Toronto: University of Toronto Press, 1989.

Gallagher, Shaun, and Dan Zahavi. “Phenomenological Approaches to Self-Consciousness.” Stanford Encyclopedia of Philosophy (Spring 2015 Edition), Edward N. Zalta, ed. Online publication as of 26 October 2015.

Kolitz, Daniel. “Are Plants Conscious?” Online publication as of 30 May 2018.

Meacham, Bill. “How to Talk About Subjectivity (Don’t Say ‘Consciousness’).” Journal of Consciousness, Vol. 19, No. 62, 1 January 2017 – 30 June 2017 (forthcoming). Online publication

Peirce, Charles Sanders. Collected Papers of Charles Sanders Peirce, Volumes V and VI. Edited by Charles Hartshorne and Paul Weiss. Cambridge: The Belknap Press of Harvard University Press, 1965.

Strawson, Galen. “Self-Intimation.” In The Subject of Experience. Oxford: Oxford University Press, 2017, pp. 136-164.

Wegela, Karen Kissel. “How to Practice Mindfulness Meditation.” Online publication as of 29 February 2016.

Feb 27 18

Do Plants Have Goals?

by Bill Meacham

The topic this time is plants, specifically, whether plants have goals, as sentient agents do. Contemporary philosopher Scott Sehon, echoing the intuitions of many, says they don’t. I’m not so sure.

Sehon’s concern is the concept of teleology, the attempt to explain things in terms of goals or purposes. (The term comes from the Greek telos, which means an end, purpose or goal.) In trying to untangle the nuances of the concept he asks whether and to what extent any of the following can reasonably be said to have goals:(1)

  • A rock remains motionless on the ground.
  • A marble rolls down the inside of a bowl.
  • A heat-seeking missile turns toward the north.
  • A plant turns toward the sun.
  • A spider runs across its web.
  • A cat climbs up a tree.
  • A person, Jackie, goes to the kitchen.

We can explain Jackie’s action by saying that she goes to the kitchen to get a drink. Getting a drink is her goal, or intention. We do not explain the rock’s remaining motionless by saying that it does so in order to maintain a constant velocity. The rock has no goal; it just responds to external forces, which at the moment are in equilibrium. The other cases are in between. The marble does not roll down in order to get to the bottom; it just responds to gravity. The heat-seeking missile acts as if it has a goal, but its goal is not its own; rather, someone has programmed the goal into it. The spider runs across its web to get to the prey ensnared there, and the cat runs up the tree to get away from a dog. These two seem to be clear-cut cases of having a goal, much as Jackie has the goal of getting a drink. But what about the plant?

We can explain the plant’s action by saying it turns toward the sun to get the most sunlight. But Sehon objects, saying "we are not comfortable with [the] apparent suggestion that we view the plant as an agent aiming for a particular goal."(2) He views the plant’s movement as a mere tropism, mechanical and not agential.

Now, the first thing to note is that while Sehon himself is uncomfortable, it is not at all clear that everyone else is. My wife, a gardener and landscape designer, would have no problem at all with saying that the plant moves in order to get more sunlight. Philosophers often appeal to their intuitions about what words mean or what one would say in a certain situation, but their intuitions may well be biased.

Sehon is a philosopher, not a scientist, but his appeal to intuition is a type of informal research using himself and his peers as subjects. As such a researcher he is susceptible to a criticism of contemporary behavioral science: that it uses research subjects who are not representative of the human population worldwide. A recent review of behavioral science research finds that "subjects are taken largely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies" and that "members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans."(3) Sehon himself, a philosophy professor in Maine, is firmly among the WEIRD,(4) so the fact that he has an intuition that plants don’t have goals does not carry much weight as evidence. What would an indigenous hunter-gatherer in Brazil have to say? How about a pastoral nomad in Mongolia? A farmer in Uzbekistan?

Be that as it may, it is undeniable that plants differ from animals in significant ways. (Or so it seems to me, also a WEIRD person.) Plants are less like us, who clearly have goals, than animals are. To the untrained eye, it is clear that animals can do a number of things that plants can’t, most notably move around freely and perceive things at a distance.

But the untrained eye can be deceived. Aristotle, for instance, characterized plants as being able to grow and reproduce, but not to perceive, going so far as to say that they have no sense of touch.(5) Nowadays we know that plants do perceive, and some have quite an obvious sense of touch. For an example, watch this video of a Mimosa pudica, aka Sensitive Plant, whose compound leaves fold inward and droop when touched:(6)

There are many other parallels between plants and animals. It has been known for over 25 years that plants, even though they lack an animal nervous system, send nerve-like messages through their bodies via electrical signals.(7) Newer research finds much more evidence that plants have features analogous to nervous systems and brains:(8)

  • Plants have genes that are similar to those that specify components of animal nervous systems.
  • These genes specify proteins that behave in ways very similar to neural molecules.
  • Some plants have synapse-like regions between cells, across which neurotransmitter molecules facilitate cell-to-cell communication.
  • Many plants have vascular systems that look like they could act as conduits for impulses transmitted throughout the plant body.
  • Some plant cells display action potentials, events in which the electrical polarity across the cell membrane does a quick, temporary reversal, as occurs in animal neural cells. The behaviors of the Sensitive Plant and the Venus Flytrap are examples.

So plants send nerve-like messages within themselves. Does that mean they are intelligent enough to have goals? Are they, in other words, agents? Consider some additional evidence. Monica Gagliano, an animal ecologist at the University of Western Australia, did an experiment with the aforementioned Mimosa pudica, here recounted by science journalist Michael Pollan.

Gagliano potted fifty-six mimosa plants and rigged a system to drop them from a height of fifteen centimeters every five seconds. Each "training session" involved sixty drops. She reported that some of the mimosas started to reopen their leaves after just four, five, or six drops, as if they had concluded that the stimulus could be safely ignored. "By the end, they were completely open," Gagliano said to the audience. "They couldn’t care less anymore."

Was it just fatigue? Apparently not: when the plants were shaken, they again closed up. "’Oh, this is something new,’" Gagliano said, imagining these events from the plants’ point of view. "You see, you want to be attuned to something new coming in. Then we went back to the drops, and they didn’t respond." Gagliano reported that she retested her plants after a week and found that they continued to disregard the drop stimulus, indicating that they "remembered" what they had learned. Even after twenty-eight days, the lesson had not been forgotten.(9)

This experiment certainly suggests that plants learn and remember. But are they really agents, with intentions, goals, desires and the like? We think they just stand in place like so much green furniture, but that’s because they move too slowly for us to notice. Consider this video of a bean plant shot with time-lapse photography provided by researcher Stefano Mancuso:(10)

I wonder if you will agree that the plant’s activity seems to be directed rather than flailing around aimlessly. To me (admittedly a WEIRD observer) it certainly seems to have a goal and to make efforts toward that goal. Pollan says

Mancuso’s video seems to show that this bean plant "knows" exactly where the metal pole is long before it makes contact with it. Mancuso speculates that the plant could be employing a form of echolocation. There is some evidence that plants make low clicking sounds as their cells elongate; it’s possible that they can sense the reflection of those sound waves bouncing off the metal pole.

The bean plant wastes no time or energy "looking"—that is, growing—anywhere but in the direction of the pole. And it is striving (there is no other word for it) to get there: reaching, stretching, throwing itself over and over like a fly rod, extending itself a few more inches with every cast, as it attempts to wrap its curling tip around the pole. As soon as contact is made, the plant appears to relax; its clenched leaves begin to flutter mildly.(11)

In addition to cultural biases, we humans have a generic bias: we see things easily in our time scale but not at all or only with difficulty in other time scales. Our invention of time-lapse photography enables us to see features of the world that we normally overlook entirely. One of these features is the agential, goal-directed nature of plants.

There is quite a bit of controversy among botanists about what all this means. That’s why, out of caution, Pollan puts words such as "knows" and "looking" in scare quotes. Some have called for the creation of a whole new field, to be called "plant neurobiology" because plant signaling is so much like animal neural activity and because plant behavior is too sophisticated to be explained by genetic and biochemical mechanisms.(12) Some, less confrontationally, call the field "plant signaling and behavior."(13) Others strongly disagree, going so far as to say that plant neurobiologists are from "the nuthouse."(14) The issue is largely semantic, since nobody questions the data, but it strikes at the core of our concept of ourselves. Are humans a special category of the living, different enough to be considered distinct from other animals and especially from plants? Or are we one end of a continuum of life that ranges without sharp demarcations from tiny, single-celled bacteria to extraordinarily complex human beings?

My own preference is the latter. The view that we are part of a continuum of life seems to fit the data better than the opposite view. And if widely adopted, it might prompt us to have more empathy for our fellow living creatures and to stop the ecological devastation that threatens our survival.

(1) Sehon, p. 160.

(2) Sehon, p. 161.

(3) Henrich et al., p. 61.


(5) Aristotle, On The Soul, 2, 413a 26 – 413b 13. To be fair, he does not explicitly say that no plants have a sense of touch, but implies that assertion by contrasting them with animals, all of which do.

(6) Íñiguez, “Mimosa pudica – Sensitive Plant.”

(7) Yoon, “Plants Found to Send Nerve-Like Messages.”

(8) DeSalle, “Do Plants Have Brains?”

(9) Pollan, “The Intelligent Plant.”

(10) Pollan. “Plant Neurobiology.”

(11) Pollan, “The Intelligent Plant.”

(12) Brenner et al., “Plant neurobiology.”

(13) Baluska et al., Plant Signaling and Behavior.

(14) Pollan, “The Intelligent Plant.”

Aristotle. On the Soul, tr. Terence Irwin and Gail Fine. In Readings in Ancient Greek Philosophy: From Thales to Aristotle, Fourth Edition, ed. S. Marc Cohen et al. Indianapolis: Hackett Publishing Company, 2011. Another translation is available online at

Baluska, Frantisek, et al., eds. Plant Signaling and Behavior. Online at as of 24 February 2018.

Brenner, Eric D., et al. “Plant neurobiology: an integrated view of plant signaling.” Trends in Plant Science, Vol. 11, No. 8 (2006), pp. 413-419. Online publication as of 24 February 2018.

DeSalle, Rob, and Ian Tattersall. “Do Plants Have Brains?” Natural History. Online publication as of 20 February 2018.

Henrich, Joseph, et al. “The weirdest people in the world?” Behavioral and Brain Sciences, Vol. 33 (2010), pp. 61-135. Online publication as of 23 February 2018.

Íñiguez, Ángel Daniel Alfaro, videographer. “Mimosa pudica – Sensitive Plant.” (video) Online publication as of 20 February 2018.

Pollan, Michael. “Plant Neurobiology.” (video) Online publication as of 20 February 2018.

Pollan, Michael. “The Intelligent Plant: Scientists debate a new way of understanding flora.” The New Yorker, 23 December 2013. Online publication as of 15 February 2018. I highly recommend this piece. It has far more to say than what I have quoted.

Sehon, Scott. Teleological Realism: Mind, Agency, and Explanation, Cambridge, MA: MIT Press, 2005.

Yoon, Carol Kaesuk. “Plants Found to Send Nerve-Like Messages.” New York Times, 17 November 1992. Online publication as of 20 February 2018.

Nov 19 17

A Harmful Ambiguity

by Bill Meacham

Massimo Pigliucci has written an entertaining book, Answers For Aristotle, about how recent scientific discoveries can shed light on perennial problems of philosophy and how philosophy can make sense of surprising new knowledge. Drawing on neuroscience, psychology, evolutionary biology and other disciplines, he shows that we need both science and philosophy to make sense of who we are and how best to live our lives. Pigliucci is a skillful writer, and the book is enjoyable and informative. But it has an annoying flaw: historical inaccuracy and conceptual confusion stemming from ambiguous language.

Pigliucci contrasts three approaches to deciding what to do in morally problematic situations. Deontology, from a Greek word meaning duty, tells us to follow moral rules because they tell us the right thing to do. Moral rules may be taken to come from divine decree or from the dictates of rationality or from a special faculty of intuition; but however we come to know them, they are to be followed regardless of their consequences. This principle is taken to the extreme by Kant, who asserts that it would be wrong to tell a lie to a murderer who asks whether our friend who is being pursued by the murderer has taken refuge in our house.(1) Most of us find such rigid honesty morally repugnant.

Consequentialism, by contrast, tells us that the consequences of our actions are of primary importance, regardless of the rules. Its best-known variant is Utilitarianism, which says that we should try to produce the greatest happiness for the greatest number of people. Consequentialism evaluates moral choices in terms of the consequences of our actions regardless of whether they are in accord with moral rules. Taken to the extreme, this approach would have us sacrifice one healthy man to harvest his organs for several others who need them. This too we find morally repugnant.

Both deontology and consequentialism, despite seeming differences, are actually quite similar. Both are in what I have called the Rightness paradigm, being ways to find the right thing to do.(2) The third approach, virtue ethics, is in the Goodness paradigm. It gives us advice about how to live a good life by cultivating morally laudable traits of character such as honesty, courage, moderation and the like. This approach does not tell us what to do in particular quandaries. Instead it tells us what kind of person to be such that we will do what is morally appropriate almost automatically. We will act because of who we are, not because we have figured out what to do by consulting a moral system. Virtue ethics originated in ancient Greece and found its fullest flowering in Aristotle. The point of cultivating virtues, according to that famous philosopher, is to be able to live a life of eudaimonia, that is, a life of happiness or, better, flourishing or fulfillment. It’s not just the feeling of being happy that is the goal, but really functioning well in all areas of life.

Now consider this statement by Pigliucci:

According to virtue ethics … human beings need to steer themselves in the direction of virtuous behavior both because that is the right thing to do and because the very point of life is to live it in a eudaimonic way.(3)

There are two different assertions here, and only one of them is historically accurate. Aristotle does indeed claim that, as a factual matter, human beings seek happiness (eudaimonia) above all else since “we always choose it because of itself, never because of something else.”(4) That is, we choose, for instance, health over illness because health makes us happier, but we choose happiness just for itself, not for any other reason. Aristotle goes on to claim that what makes us happy is the exercise of our distinctly human function, which is the ability to reason.(5) And not just to reason, but to reason well, that is, excellently. (The Greek word areté, often translated as “virtue,” also means excellence.) Life is activity, so the happy life is an active one that is governed by reasoning well: “The human good turns out to be the soul’s activity that expresses virtue.”(6) (Again, read “excellence” for “virtue”, and nowadays we would say “mind” instead of “soul.”) Happiness is to be found in activity governed by excellent reasoning. The activities in life that were taken to be characteristic of an excellent man—and it was free men that Aristotle addressed, not slaves or women—were virtues such as courage, moderation, generosity, honesty and so forth. Aristotle has a lot to say about the nature of these virtues, which need not concern us here. The point is that Pigliucci is correct in saying that we are well advised to steer ourselves toward virtuous behavior because doing so will bring us a happy—that is, a flourishing or fulfilled—life.

But Pigliucci’s other assertion, that behaving virtuously is the right thing to do, misunderstands how rightness figures into Aristotle’s thought. The term “right” is ambiguous. It can mean to be in accordance with a moral law; that’s what we moderns mean when we speak of doing the right thing. But it can also mean to be appropriate or fitting, as when we speak of wearing the right clothes for a social occasion. Aristotle does speak of rightness. Famously, he says that virtue of character entails feelings and actions that are had or done “at the right times, about the right things, towards the right people, for the right end, and in the right way….”(7) But “right” in this context means what is generally accepted and approved by Athenian gentlemen of the time, not what accords with what a moral rule dictates.

As Elizabeth Anscombe has pointed out tersely and Alasdair MacIntyre much more comprehensively, our concepts of moral obligation, duty, rightness and wrongness are holdovers from a conception of ethics that no longer holds much power; and, says Anscombe, those concepts are harmful without it.(8) That conception, which arose with Judaism and Christianity, is the idea of divine law, a legal code issued by God and to which all God’s creatures are subject. Certainly the idea is not dead. Lots of people believe in a law-giving God, and most of them insist that their idea of what God commands is the right one, an attitude that promotes much strife. But for many, perhaps most, others, the idea of God has little relevance, and Anscombe’s point is well taken. It is peculiar that Pigliucci uses “right” in this sense, because he devotes three chapters to debunking belief in the existence of God.

I have argued that confusing the concepts of goodness and rightness is harmful because it inhibits clear thinking.(9) I have also argued that it makes more sense to think in terms of goodness, but that is not my point here. My point is that it is a shame to see such an otherwise cogent thinker make such a basic mistake.


(1) Kant, “On A Supposed Right To Lie.”

(2) Meacham, “The Good and The Right.”

(3) Pigliucci, Answers For Aristotle, p. 72.

(4) Aristotle, Nicomachean Ethics I.7, 1097b1, trans. Irwin.

(5) Ibid., 1097b22-29.

(6) Ibid., 1098a16.

(7) Aristotle, Nicomachean Ethics II.6, 1106b18-24.

(8) Anscombe, “Modern Moral Philosophy.” MacIntyre, After Virtue.

(9) Meacham, “The Good and The Right.”


Anscombe, G.E.M. “Modern Moral Philosophy.” Philosophy, Vol. 33, No. 124, January 1958. Online publication as of 27 October 2015.

Aristotle. Nicomachean Ethics, trans. T. Irwin. In Readings in Ancient Greek Philosophy: From Thales to Aristotle, Fourth Edition, ed. S. Marc Cohen et al. Indianapolis: Hackett, 2011.

Kant, Immanuel, “On A Supposed Right To Lie Because of Altruistic Motives.” Online publication as of 19 November 2017. Also as of 19 November 2017.

MacIntyre, Alasdair. After Virtue: A Study in Moral Philosophy, Third Edition. Notre Dame: University of Notre Dame Press, 2007.

Meacham, Bill. “The Good and The Right.” Online publication

Pigliucci, Massimo. Answers For Aristotle: How Science and Philosophy Can Lead Us to a More Meaningful Life. New York: Basic Books, 2012.