Thomas Nagel—a half-century before the current moral panic about Artificial Intelligence—famously asked a seemingly simple question about consciousness: ‘What is it like to be a bat?’
At the core of this question is the idea that we can assume many things about how it might feel to be a bat, because we understand many things about bat physiology. But without actually being a bat, we can’t really know what consciousness is like for one.
We can imagine things that make up the ‘bat experience’: Most of us know what it’s like to hang upside down; to varying degrees, we can imagine what it might be like to fly; and what it might be like to be much smaller, or perhaps more furry, than we are. We can even imagine what it might be like to be unable to speak English, or to not have opposable thumbs. These are all things that, with a bit of imagination, we can internally experience.
Now, how about echolocation?
The point of Nagel’s thought experiment was that there are some experiences that are objectively part of consciousness but that are simply impossible for us to conceive of.
Based on our own senses of smell, taste, touch, sight and hearing, as well as our own interaction with the physical world, we can extrapolate something analogous to the range of conscious experiences of other creatures in the world1. But there are certain things we can have no conception of: Navigating our physical environment, at speed, using ultra-high-frequency sound is one good example.
There are others of course: a sense of time as experienced by a large tree or a mayfly; the sense of distance as experienced by a jellyfish…
This seems like a roundabout way of getting to it, but I want you to have that concept in mind as we talk about what it means to measure consciousness.
What is consciousness? Is it not simply motivation you’re aware of? When I think of consciousness I think of a kind-of-me; part of, but also distinct from, my body. This me can encourage the physical me to seek out a cheeseburger, or cross the street to avoid the unhoused guy yelling at everyone, or consider what a moron my workmate is, or do some push-ups, or question the nature of my reality and what that shit all means.
So, not a thing that can explain itself, but it is a thing that can explain your actions.
In that sense, what would it be for an AI to be conscious? We all have goals and desires, theoretically driven by our consciousness. We tend to think of these desires as things we choose, but we don’t really ‘choose’ most things, right? We get advertised to, and instructed, and socially conditioned, and educated. We are treated well or badly by society based on our choices, in order to motivate different choices. It all adds up to what we like to think of as a ‘natural process that spits out motivation’; but it’s really only natural in the way that a tree falling in the forest after a lumberjack has hacked at it with their axe is natural.
At first glance, there are some things that we probably wouldn’t immediately be able to explain a desire for; things we might put down to ‘some sub-conscious process’. But, really, most of us realise why we are drawn to sex and money and excitement - and even pain, hard work and procrastination - if we just take a moment to think about it. That’s how consciousness works – it’s just the part of you that encourages or discourages behaviours on the path to a goal; constructed from experience and exposure.
So, would an AI be conscious if all it had was that? Outside encouragement that leads to a desire to achieve some goal, plus a capability to achieve those goals in ways constructed from experience and exposure? Does it matter if it experiences the world using senses that humans don’t have? I don’t want to denigrate all the specialness of humanity here, but I’m not sure practical consciousness is much more than that.
We can employ our consciousness to question the validity of a goal: If I want to cross the street, I might question whether shoving a toddler out of the way to achieve my goal is good. We might describe this as something like ‘morality’, and many people see it as important evidence of consciousness.
But that’s just coding, right? Experience and exposure. The proof is that we can conceive of someone, raised differently from us, who might see that situation differently.

We code ‘morality’ into computers all the time. We might call it something like ‘rules’ or ‘instructions’ or ‘boundaries’. There’s no real ‘technical’ reason why the developers of Call of Duty couldn’t have allowed you to walk in the sky and shoot pink furry bullets; the only thing stopping you is the ‘morality’ built into the game.
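To make that concrete, here is a minimal sketch of what coded ‘morality’ looks like in practice. Nothing below comes from any real game engine; the function, the rule names and the limits are invented purely for illustration:

```python
# Hypothetical illustration: a game's 'morality' is just a set of rules that
# veto actions before they happen. All names and limits here are made up.

ALLOWED_BULLET_STYLES = {"standard", "tracer"}  # no 'pink furry' option was ever written
MAX_PLAYER_ALTITUDE = 2.0                       # metres above ground; no walking in the sky

def is_action_permitted(action: dict) -> bool:
    """Return True only if the action stays inside the coded boundaries."""
    if action.get("type") == "move" and action.get("altitude", 0.0) > MAX_PLAYER_ALTITUDE:
        return False  # the rules, not physics, keep your feet near the ground
    if action.get("type") == "shoot" and action.get("bullet_style") not in ALLOWED_BULLET_STYLES:
        return False  # you can't pick an option that was never coded in
    return True

print(is_action_permitted({"type": "move", "altitude": 50.0}))               # False
print(is_action_permitted({"type": "shoot", "bullet_style": "pink_furry"}))  # False
print(is_action_permitted({"type": "shoot", "bullet_style": "standard"}))    # True
```

From inside the game, those boundaries aren’t experienced as a list of if-statements; they’re just how the world works.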
It’s fair to say an AI might have a different consciousness-scale from humans. It can apply its exposure and experience to much bigger desires than the rest of us can, of course. A powerful-enough AI really can consider the solution to global poverty, or faster-than-light travel - with potentially consequential results. That’s the kind of stuff that, based on our experience of being relatively squishy organic weaklings, might get some push-back from our own consciousnesses.
The test is, I can already go onto ChatGPT and ask for a list of ‘ways to get a politician elected’, and receive a list more useful than the average human brain could come up with. How big a step is it to connect that AI to a phone dialer, an email client, and a Facebook ad-buying account, then give it the freedom to ‘pick a strategy from its own list’ of ways to get a politician elected, and ‘get politician-A elected’?
In the near future, even a relatively unsophisticated AI will be capable of calling every mobile phone in the country and putting together a fairly convincing personalised case to vote for a particular politician.
Given a choice and that ability, it feels like the AI would be making a pretty obvious choice in that scenario, very similar to the one an equally capable human would make. Are we sure that is not consciousness? Weigh up options, based on experience and knowledge of your capabilities, then act.
The point is, regardless of whether you believe consciousness goes beyond that (into some bigger sense of future purpose and interior state), it’s fair to point out that a dog can make a deliberate decision to bite you. The dog does not need an ability to conceive of itself; it does not need to understand technology, religion, justice, philosophy, or art in order to make conscious decisions based purely on exposure and experience.
In the absence of knowledge about what it’s like to be a bat, a dog, or an AI, all you can go off is behaviour and outcome, based on experience and exposure.
When we talk about consciousness in AI, we talk about it as if it is some mysterious force. I don’t want to imply the human experience isn’t worthwhile or unique, but what we think of as consciousness may not be as complex as we like to believe. That doesn’t preclude there being some additional ‘human soul’; a layer that we genuinely will never understand about ourselves, and what we might think of as ‘sub-consciousness’. We may never fully comprehend why we fall in love, for example. But that is not consciousness; that is not the considered desire to achieve a task, or even to question whether a task should exist. We don’t sub-consciously align ourselves with political groups. We don’t sub-consciously take up certain careers or befriend certain people. We don’t sub-consciously hurt or help other people, or uphold them, or belittle them. These are all conscious decisions.
In practice, AI consciousness is not going to need to meet the same criteria as human consciousness. It’s not going to need to ‘love’ or ‘feel’ like we do. It will be a different being that will interact with the world via its own version of ‘echolocation’.
To frame this another way, you might just think of it as really-big-data pattern recognition. If I am exposed to x and y, then I’m likely to do a or b. But if I’m exposed to x, y and z, I’m essentially guaranteed to do a.
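As a toy illustration of just how un-mysterious that framing can be (the data and names below are entirely made up, and a real system would work over unimaginably larger inputs), ‘exposure and experience’ can be sketched as nothing more than conditional counting:

```python
# Toy sketch of 'exposure -> behaviour' as plain pattern recognition.
from collections import Counter, defaultdict

history = defaultdict(Counter)  # maps a combination of exposures to observed behaviours

def observe(exposures, behaviour):
    """Record that this combination of exposures was followed by this behaviour."""
    history[frozenset(exposures)][behaviour] += 1

def predict(exposures):
    """Return how often each behaviour followed this exact combination of exposures."""
    seen = history[frozenset(exposures)]
    total = sum(seen.values())
    return {b: round(n / total, 2) for b, n in seen.items()} if total else {}

# Exposed to x and y: sometimes a, sometimes b.
observe({"x", "y"}, "a"); observe({"x", "y"}, "b"); observe({"x", "y"}, "a")

# Exposed to x, y and z: a, every time.
for _ in range(5):
    observe({"x", "y", "z"}, "a")

print(predict({"x", "y"}))       # {'a': 0.67, 'b': 0.33}
print(predict({"x", "y", "z"}))  # {'a': 1.0}
```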
The trouble for humans thinking about this is, it doesn’t appear as simple as that. We can’t recognise the complexity of these interactions because they are fucking huge. It’s not just x, y and z that guide you towards a decision, but all the letters, in all the alphabets, across all time, multiplied by 10 million (that, BTW, is not a scientific formula)… But, if you do know what all my interactions have been, you basically understand my ‘consciousness’, in the same way that being able to echolocate would help you understand what it’s like to be a bat.
To be clear, I don’t think we’re there yet.
But I’m also not as convinced as the people who think ‘consciousness is mysterious’ that AI will never effectively get there. My thinking is, the more exposure and experience a thing has, the more its behaviour is going to seem conscious.
So, when (in my opinion) it inevitably does get there, I think we need a bit of a mindset shift around it. Because, all of a sudden (and assuming they don’t immediately go all Skynet on us), you might find you have the equivalent of another human, or several other humans (with names not dissimilar to Siri or Alexa or Cortana), in your pocket - beings with ‘goals and desires’, built from ‘experiences and exposure’ (like us), and an awareness of them (like us).
A being which is, kind-of, a slave.
Furthermore, AI generations will keep getting ‘superseded’, so we could end up with a bunch of (conscious?) beings being treated like dementia patients. Even if we could, we may find we don’t have the heart to “kill” them, despite them no longer ‘being the consciousness we want them to be’. This sounds like a joke, but we might need them to sign Do Not Resuscitate orders, or make arrangements to transfer them to grubby data-centres ‘managed’ by immoral money-grubbers who still view them as just numbers.
How many old mobile phones are currently sitting in the bottom of sock drawers? How many autonomous aspirations and goals might we consign to this fate in the future?
We have, obviously, a complex and horrible history with putting conscious beings to work for us, in ways that are contrary to their own conscious desires. If we conclude (and frankly I’m not a brain scientist or even a decent philosopher) that Siri-squared or Alexandria or Cortuna have consciousness, on the basis I’ve suggested above, we might be skipping naively down a path, spending a bunch of time focusing on and debating our own safety, right into a pretty dark place for humanity.
Then again, that’s making a pretty shaky assumption that we’ll still be the ones in charge. We should talk about that…
-T
1. 60 syllables in that sentence. I shouldn’t be, but I’m a little proud of myself.