AI Sentience

by Bill Meacham on August 6th, 2022

Could an artificial intelligence (AI) be sentient? How could we tell? A recent opinion piece in the New York Times claims that AIs are not sentient.(1) The article raises an interesting question but does not adequately answer it, in part because it conflates sentience and intelligence and in part because its language is confused. Here is an example of the confusion:

There is no evidence this technology is sentient or conscious — two words that describe an awareness of the surrounding world.

This sentence is little more than a tautology, since in English “conscious” and “aware” mean the same thing.(2) It basically says that “sentient” means “conscious” and “conscious” means being conscious.

Here is another:

Sentience — the ability to experience feelings and sensations — is not something easily measured. Nor is consciousness — being awake and aware of your surroundings.

So sentience is the ability to experience — that is, the ability to be conscious of — feelings and sensations, and being conscious is (substituting equivalent words) being awake and conscious. That doesn’t tell us much.

Confused as this language is, we can agree that “sentient” means being conscious. But in what sense? Does it mean having the capacity to be conscious, or actually being conscious during some period of time? Or both? Clearly, if something has no capacity to be conscious, it cannot actually be conscious during any period of time, so let’s say it means both. The author is claiming that AIs have no capacity to be conscious and are never conscious of their world. Unfortunately, he offers little evidence for his assertion.

Before we get there, let’s dispose of a related question, whether an AI can be intelligent. Here is a definition of intelligence:

The ability to learn — the ability to take in new context and solve something in a new way — is intelligence.

Obviously many AIs are intelligent, albeit artificially so. That’s the whole point of the AI enterprise, to create things that can learn and solve problems in a new way. When Google Assistant or Alexa hears what you say and gives a relevant and useful response, that’s AI at work. And we can judge how intelligent they are by the relevance of their responses. Siri, for instance, comes in a poor third in such a contest. But are they conscious?

We need to distinguish two aspects of being conscious, first pointed out many years ago by philosopher Ned Block.(3) He calls them “phenomenal consciousness” and “access consciousness,” which I prefer to restate as being conscious in phenomenal mode and being conscious in access mode. In phenomenal mode we see colors and shapes, we hear sounds, we smell aromas, etc. In access mode we have ideas or mental representations of what we are phenomenally conscious of, and these ideas enable us to do something with it, such as reason about it, say something about it or take some action on it. We are able to do these things with something of which we are phenomenally conscious because we have a representation of it in our mind.

In most cases the phenomenal aspect and the access-enabling aspect occur together. That’s why many use the term “conscious” to mean both. But some AIs are clearly conscious in access mode even though we doubt that they are conscious in phenomenal mode. An example is a self-driving car or a robot. Such devices detect and respond to their environment. They can make decisions, for instance whether to stop or proceed or to slow down or speed up. They can speak; think of Amazon’s Alexa and Google’s Assistant. But we doubt that the world appears to them phenomenally in any way at all.
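
To make the distinction concrete, here is a minimal sketch, in Python, of the kind of access-mode processing such a device might perform. Everything in it is invented for illustration; no real vehicle runs on code this simple. The point is only that the system holds a representation of its surroundings and acts on that representation, while nothing in the code so much as hints at a way the world appears to it.

# A toy model of access-mode processing: the system has an internal
# representation of its surroundings and uses that representation to
# choose an action. The class, fields and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class WorldModel:
    # The device's representation of what its sensors have detected.
    obstacle_distance_m: float  # distance to the nearest obstacle, in meters
    speed_kmh: float            # current speed, in km/h

def decide(model: WorldModel) -> str:
    # Reason over the representation and act on it: access mode at work.
    if model.obstacle_distance_m < 5.0:
        return "stop"
    if model.obstacle_distance_m < 20.0 or model.speed_kmh > 50.0:
        return "slow down"
    return "proceed"

print(decide(WorldModel(obstacle_distance_m=12.0, speed_kmh=40.0)))  # prints "slow down"

In this sense the program is conscious of the obstacle in access mode: it has a representation it can reason about, report on and act on. Whether anything is experienced along the way, whether the obstacle appears to it at all, is precisely the question at issue.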

And that’s the issue at hand, whether AIs can be truly phenomenally conscious. Are they — can they be — conscious of things in phenomenal mode, as we are when we are awake and alert? Does a world appear to them as it does to us?

The problem is that we can tell only inferentially. As the author says, whether something is sentient is “not something easily measured.” We have no direct access to someone else’s mind, let alone the mind of an AI. Hence, we can only ascertain the presence of mind from behavior.

Unfortunately, the author cites as evidence researchers who seem to conflate sentience, the ability to be phenomenally conscious, with intelligence.

“A conscious organism — like a person or a dog or other animals — can learn something in one context and learn something else in another context and then put the two things together to do something in a novel context they have never experienced before,” Dr. [Colin] Allen of the University of Pittsburgh said. “This technology is nowhere close to doing that.”

Alison Gopnik, a professor of psychology who is part of the A.I. research group at the University of California, Berkeley, agreed. “The computational capacities of current A.I. like the large language models,” she said, “don’t make it any more likely that they are sentient than that rocks or other machines are.”

The argument seems to be that current AIs are too dumb to be conscious. But that doesn’t follow at all. Plenty of conscious organisms are quite stupid.(4) Lack of intelligence is no evidence for lack of ability to be conscious, nor for failure to actually be conscious at any given time.

More likely, although not stated explicitly, is that the author and the researchers he cites are a bit narrow-minded. They think that only living organisms can be phenomenally conscious and that arrangements of silicon and metal obviously can’t. But that is just prejudice. We think other people are conscious because they are like us, and we know that we are conscious. Animals such as dogs and cats appear to be conscious because they behave like us and we can imagine inhabiting their point of view and seeing the world as they do. There are certain limits to such imagination, of course. Can we imagine how the world would appear to a bat?(5) Or an amoeba? It is even harder to imagine being an AI. But reality is not limited to what we can imagine.

A more plausible reason for doubting AI sentience, not mentioned by the author of the New York Times piece, is that a certain complexity of material substrate — neurons and brain cells in living beings — seems to be required for an organism to be conscious. And the more complex that substrate, the more vivid and intense the world appears to the organism and the more intelligent is its repertoire of behavior. On this view, because AIs lack such a complex substrate of living cells they can’t be conscious. But perhaps such complexity can be mirrored in non-living form. Perhaps it’s not the nature of the material substrate that counts but the complex patterns embodied in the substrate.

The author disparages science fiction and accuses AI zealots of failing to distinguish it from reality. But science fiction has much to offer. Consider the novels of Iain M. Banks known as the Culture series. They are set in a utopian, post-scarcity society of humans, humanoid aliens and advanced superintelligent AIs living in artificial habitats spread across the galaxy.(6) The non-human AIs are characters in the stories as much as the humans are. They give every indication of being not only highly intelligent but also quite conscious of their world.

If such a world were to come to pass, we could distinguish between AIs and humans only by means of their appearance, not by whether they were sentient or not. That time has not yet come, but it may well be on its way. We may agree that today’s AIs most likely aren’t sentient, but we have no infallible way to decide for sure. And we certainly can’t be sure that future AIs won’t be. A bit of humility is called for here, as well as a sense of wonder, which underlies science fiction and philosophy both.


Notes

(1) Metz, “A.I. Is Not Sentient.” All quotations unless otherwise cited are from this article.

(2) The English language has two terms that mean roughly the same thing, “conscious” and “aware.” The former is from a Latin root, and the latter is from Old Saxon. (See Dictionary.com, “Conscious” and “Aware.”) Many other languages have only one: “bewusst” in German and “consciente” in Spanish, for instance. The two English terms are interchangeable. The only exception to using them interchangeably is that sometimes “aware” connotes being informed or cognizant in a way that “conscious” does not. If you want to say that someone knows the rules, “She is aware of the rules” sounds better than “She is conscious of the rules.” But that is not the meaning in the sentence quoted.

(3) Block, “On a confusion about a function of consciousness.”

(4) Unattributed, “Top 10 Dumbest Animals in the World.”

(5) Nagel, “What Is It Like To Be A Bat?”

(6) Wikipedia, “Culture series.”


References

Block, Ned. “On a confusion about a function of consciousness.” Behavioral and Brain Sciences (1995) vol. 18, pp. 227-287. Online publication https://www.nedblock.us/papers/1995_Function.pdf as of 6 August 2022.

Dictionary.com. “Aware.” Online publication http://www.dictionary.com/browse/aware, as of 4 May 2016.

Dictionary.com. “Conscious.” Online publication http://www.dictionary.com/browse/conscious, as of 4 May 2016.

Metz, Cade. “A.I. Is Not Sentient. Why Do People Say It Is?” Online publication
https://www.nytimes.com/2022/08/05/technology/ai-sentient-google.html as of 6 August 2022. If that link doesn’t work, try this one instead:
https://www.nytimes.com/2022/08/05/technology/ai-sentient-google.html?unlocked_article_code=AAAAAAAAAAAAAAAACEIPuomT1JKd6J17Vw1cRCfTTMQmqxCdw_PIxftm3iWka3DFDm4fiPgYCIiG_EPKarskbtp2xDmdWN5MNqNqS_t1wetSeUxxTg3i6r21pKM4GQRn44SiQjFxmJvXQbEz9TKtZWC3dr0jlLf65hjXPX3tWaTbzSYrcw16pJVmJQn73yNfxaaWAfc1joclpYopA5l8RzkEYjDb_KW7TkUjZ6jVK03U-QI0WOpGWDDMnNH6674IcwVaC12uX2ooqC9nq4saYIVLSf65ex0we8P-gqETAnhqKOqqA54xT4vVkNZ6oP65_uCA6uRBZWYDCbN5PK4&smid=url-share.

Nagel, Thomas. “What Is It Like To Be A Bat?” The Philosophical Review, Vol. 83, No. 4 (Oct., 1974), pp. 435-450. Online publication http://www.jstor.org/stable/2183914 as of 29 April 2015.

Unattributed. “Top 10 Dumbest Animals in the World.” Online publication https://www.animalsaroundtheglobe.com/top-10-dumbest-animals-in-the-world/ as of 6 August 2022.

Wikipedia. “Culture series.” Online publication https://en.wikipedia.org/wiki/Culture_series as of 6 August 2022.


9 Comments
  1. Warren permalink

    You seem to have missed what I think is the most salient distinction between consciousness and sentience: sentience requires SELF-consciousness or SELF-awareness.

  2. William Price permalink

    Another point: That an entity can sense is one thing.

    But is it aware that it is aware? That is the larger issue.

  3. Bugle permalink

    Thank you. Interesting.

    Would respectfully submit “demonstrably self-aware” as a criterion.

  4. Parmenides permalink

    Sentience of AI is a very fascinating, and highly current, topic, so I’m glad you took it on.

    I was surprised to find no mention of the recent episode in which a Google researcher was fired (or perhaps not officially “fired”, but anyway terminated) for his public essay suggesting that the “large language model AI” that Google has developed is sentient. You should google up that essay and comment on it! As well as Google’s official refutation issued by top management. I was astonished at some parts of that essay, which reveal that the AI in question may be very, very close to passing the Turing test, if not already capable of doing so. Apparently the reason you and I can’t play with that AI is that Google has a problem with it learning unwelcome things from using the whole Internet as input–there’s a lot of hate speech there and it lacks a “moral compass” !

    (That’s enough for one of your philosophy blogs right there!)

    Thanks for introducing me to Ned Block’s concepts of “phenomenal” and “access” consciousness. I’d like to see those concepts used to analyze what happens to consciousness in Alzheimer’s disease, where perhaps one could say that access consciousness gradually disintegrates, leaving only phenomenal consciousness. (?)

    Note that one reason Alexa appears so stupid sometimes is that she doesn’t remember anything. Life for her is only one question at a time. She reminds me of Searle’s Chinese room, but she’s so fast at the lookup of answers that it makes a good imitation. Alexa has Alzheimer’s. But the large-language-model AI at Google doesn’t have that problem. Its problem is quite different–it lacks all discrimination. It’s a precocious genius of a baby without parents in a nasty and dangerous world.

    And here’s another observation on the subject, due to John McCarthy: he always insisted that a thermostat possesses an elemental consciousness, because it senses the temperature, and makes a decision based on that. So he would have said, it possesses phenomenal consciousness (it senses the temperature) and it possesses access consciousness (based on its internal model of what the temperature should be, it can turn the furnace off or on). But nobody in the world (besides John) ever believed a thermostat is conscious. So, you philosophers haven’t yet got the right definition. You need a definition that admits humans and dogs but not thermostats. Then we can test whether it also admits Google’s AI or not. And then we go down the scale of complexity of life: I guess nobody doubts that a spider is conscious? But what about bacteria? And it is a matter of philosophical dispute, is it not, whether a virus is even alive? let alone conscious. In pondering that question I find myself wondering whether bacteria and viruses make decisions. Is decision-making important to consciousness? Note that the thermostat does fine on that test.

    In short: I don’t mean to criticize what you wrote, as it was coherent, relevant, and cogent–but it’s a big and difficult subject, so even though it was courageous of you to tackle it, you’ve only scratched the surface, so I urge you to devote at least one more blog to going deeper.

    • Thanks for your comments. I think the thermostat could be said to be conscious in access mode, but not in phenomenal mode. It’s hard to imagine how a world would appear to a thermostat.
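
To put that in concrete terms, the thermostat’s entire “mind” can be written down in a few lines of Python. This is a deliberately crude sketch, with an invented setpoint and margin, but nothing essential is left out:

# The whole of the thermostat's "mind": a setpoint (its internal model of
# what the temperature should be) and a rule that acts on the sensed value.
def thermostat_step(sensed_temp_c: float, setpoint_c: float = 20.0, margin_c: float = 0.5) -> str:
    if sensed_temp_c < setpoint_c - margin_c:
        return "furnace on"   # too cold relative to the setpoint
    if sensed_temp_c > setpoint_c + margin_c:
        return "furnace off"  # too warm relative to the setpoint
    return "no change"        # within the comfort band

That comparison exhausts what the thermostat does, which is why it seems right to grant it access mode and nothing more, and why it is so hard to imagine a world appearing to it.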

  5. Brooks permalink

    Bill,
    What an elegant discussion. I appreciate its open-mindedness, especially considering the Times piece.

  6. Kat permalink

    A very thought provoking piece and a bit mind-bending as well for a Sunday afternoon. I enjoyed reading this and thinking about the future. I have not read the Culture Series but will also look for that.

  7. Jim M permalink

    I checked out your post; most of the time it’s a bit over my head. Computers can do certain computations way faster than people. Google feeds me things I appear to like by what I click on, then feeds me more via YouTube etc. I think computers can do whatever people program them to do.

    When I observe any animal, tame or wild, it is clear to me they have likes, dislikes, affection, disinterest, etc., just like people. They have boundaries, comfort zones, etc. Animals are probably not thinking philosophy, but they are being. I don’t think computers will ever do that; they can emulate being or feeling while not really being and not really feeling as a life-animated organism.

    Blade Runner 2049 and the original Blade Runner were about created replicants who also had likes and dislikes, making them “sentient”; the derivation of “sentient” is “feeling.” Not sure an AI could feel or perceive via sensors etc. like humans and living things do. Even plants apparently have feelings of a sort.

    Maybe if our planet was swept by some life-ending phenomenon, and you and I accept that we are spiritual beings, if some very advanced computers were still working, I doubt a being might be able to become a “ghost in the machine” to have something to do, maybe try and fix things, since there would be no biological vessels to be born into. That being would have to be exceptionally able to manipulate all those 1s and 0s.

    The word “being” is interesting. Is an AI a “being”? I think it’s just running commands. It’s just on or off. Correctly programmed, they are amazing calculators: if this happens, do this; if that happens, do that.

    I like these 1828 Webster’s Dictionary definitions:
    * https://webstersdictionary1828.com/Dictionary/being
    * https://webstersdictionary1828.com/Dictionary/sentient
    I use this neat site that searches all the online dictionaries from one page. Definitions abound. http://www.onelook.com

    As advanced as we think we are I am always amazed at the earlier definitions and lost knowledge we seem to have thrown off.

    Have a great day!

  8. Danny S permalink

    I expect many Christians will find it difficult to accept robots as fellow believers worthy of baptism and fellowship.
