
Is there someone behind the screen? Researchers are divided on AI consciousness
No researcher seriously believes that AI is conscious yet. But researchers are deeply divided on whether it ever could be.
It's 2032. I've just asked my household robot to make me a sandwich. It walks toward the kitchen as it would any other day, but something's different this time.
Instead of its usual efficient self, it's asking for breaks and »me time«. According to the latest update notification, my robot has gained consciousness.
Of course, this scenario is absurd. We won't simply download consciousness into our machines. If artificial consciousness emerges, it will likely happen gradually and unexpectedly. If it happens at all.
The current debate is less about whether AI is conscious now and more about whether we should expect it ever to become conscious.
I am one of the researchers who are skeptical about the possibility of machine consciousness, but I am still fascinated by it. In this article, I explain why researchers are so divided on the question.
How could it be possible?
The case for AI consciousness starts with a simple fact: we know surprisingly little about how AIs like ChatGPT actually work.
While humans designed these AIs, what happens inside them is often a mystery because of their sheer internal complexity. It's like building a huge maze with hundreds of entrances and exits and sending a mouse through.
We can see where it comes out, but not which path it took. And because we don't know the path, we cannot rule out anything that might have happened along the way.
Similarly, as long as we don't fully understand how AI works, we simply can't rule out consciousness, however small the probability may be.
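As a toy illustration of this opacity, here is a minimal Python sketch. The network and its weights are random stand-ins, not a real trained model; the point is only that we can read off the output while the internal numbers carry no meaning we can interpret:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network with arbitrary weights, standing in for a model
# whose parameters were shaped by training rather than by a designer.
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 2))   # hidden layer -> output layer

def forward(x):
    hidden = np.tanh(x @ W1)   # internal activations: just eight numbers
    return hidden, hidden @ W2

x = np.array([0.3, -1.2, 0.7, 0.0])  # some input: the mouse enters the maze
hidden, output = forward(x)

print(output)  # we can see where the mouse comes out...
print(hidden)  # ...but these numbers carry no labels and no readable meaning
```

Systems like ChatGPT work on the same principle, but with billions of such parameters, which is why what happens between input and output is so hard to trace.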
Several theoretical arguments are in favor of AI consciousness. Two of the basic ones are called »substrate independence« and »computational functionalism«.
The former is the idea that things like consciousness could exist on any material or substrate, whether it's in a biological brain or a computer.
Just like a video game works on different consoles, or how both dogs and humans can smell things despite having very different brains, maybe consciousness could work on different »hardware« too.
The latter is the idea that everything that happens in the brain can be described as a computation and hence modelled by a computer. This idea was reinforced when AIs became as good at conversing as humans. If they can do language, why not also consciousness?
From brains to computers
Some discoveries in neuroscience could also help in the search for AI consciousness.
For example, researchers found that consciousness could potentially be connected to the prefrontal cortex, in both humans and animals.
Others found that consciousness involves interactions between various brain regions, notably the frontal and parietal cortices and the thalamus (a relay center deep in the brain that processes sensory information).
Admittedly, these are for now just hypotheses, and highly debated ones. Yet they raise the hope that if we could find something similar in AI systems, we might find consciousness there too.
Lastly, even if it seems unlikely, it's worth thinking about AI consciousness. We don't want to risk building a system that is able to feel pain without acknowledging it.
Or worse, a system that has (dangerous) desires we fail to recognize. After all, throughout history, humans have often horribly failed to recognize consciousness in other beings, from dolphins to dogs and even other people.
Why is it not possible?
On the other hand, some researchers argue that a computer system running on 0s and 1s simply cannot have real experiences like smell, taste, or pain.
These researchers reject »substrate independence« and »computational functionalism«, saying that while machines might look like they're doing the same things as humans, they are in fact doing them in a fundamentally different way.
Just as a video game may seem to run on all kinds of different consoles but actually has to be specifically adapted to each one to work properly.
In this view, there is something special about biological substrates that we might not have discovered yet. Something that is simply not reproducible in computers. For example, some scientists think consciousness might emerge at the quantum level, which, as of now, we simply can't recreate in computers.
Still others suggest that the whole concept of consciousness is flawed in the first place. They say that consciousness itself is an illusion, so there is nothing real to recreate anyway.
How to test for consciousness
But even if consciousness exists (as the vast majority of researchers believe) and could be modelled computationally, testing for it would be extremely hard. There are two main approaches: theoretical tests and behavioural ones.
Theoretical tests, which involve coming up with theories of how existing conscious beings like humans work and then applying these theories to AI, are unreliable because everyone defines consciousness differently.
Under some criteria, AI would already be a candidate for consciousness; under others, it would never even come close.
Behavioural tests aren't much better, because AI systems are specifically trained to act human-like. When an AI says it has feelings or feels pain, it might just be repeating things it has seen in its training data, or doing exactly what it was prompted to do.
The ELIZA effect
A perfect example of how easily we can be fooled is something called the ELIZA effect. In the early days of AI, in the 1960s, there was a very simple program called ELIZA that just took what people said and turned it into questions.
If someone said »I feel sad,« it would ask »Why do you feel sad?« People thought this program really understood them, even though it was just doing simple word replacements.
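To see how shallow the trick was, here is a minimal sketch in Python of an ELIZA-style responder. It is not the original 1960s program, just an illustration of the word-replacement idea:

```python
import re

# A few ELIZA-style rules: match the start of a statement and
# reflect the rest back as a question. Purely illustrative patterns.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {}?"),
]

def respond(statement: str) -> str:
    """Turn a statement into a question by simple pattern matching."""
    text = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            # No understanding here: just paste the matched words
            # into a question template.
            return template.format(match.group(1))
    return "Please, go on."  # fallback when nothing matches

print(respond("I feel sad"))           # -> Why do you feel sad?
print(respond("I am tired of work"))   # -> How long have you been tired of work?
print(respond("The weather is nice"))  # -> Please, go on.
```

Nothing in this code understands anything; it only matches and reshuffles strings.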
This shows that while we might sometimes fail to recognize consciousness in complex beings like animals, we're also way too quick to see it in relatively simple machines.
Finally, there's the biggest problem: we simply don't know what consciousness is in humans, that is, where it comes from, how it works, and why we have it.
And as long as we don't know that, we are searching in a dark room without knowing what we're looking for.
Will AI ever be conscious?
A lot of the excitement about machine consciousness seems to come from our love of science fiction and new technology.
It's a cool idea that captures people's imagination. Or at least some people’s imagination. Interestingly, most of the people pushing hardest for this idea are men, particularly in the tech world.
The AI researcher Kate Crawford suggests this might be because some men are fascinated by the idea of creating life to overcome their own physiological limitations and thus challenge nature.
Long before AI, the German psychoanalyst Karen Horney described this phenomenon as »womb envy«.
They don’t need to feel anything
Another interesting question is: why would machines even develop consciousness? In nature, there's a clear reason why we evolved to feel things. Pain keeps us away from danger, and pleasure guides us toward things we need to survive.
But AI systems don't evolve like living things do. They're built by humans to perform specific tasks: playing chess, producing language, or predicting protein structures. They don't need to feel anything to do their job.
Still, I believe we should always be careful not to hurt anything that might feel pain, whether it's a human, animal, or machine. Right now, however, searching for consciousness in machines seems like reaching into the dark.
If we can't even properly explain what makes living things conscious, how can we possibly find it in artificial machines?
If my future robot asks for a break before making my sandwich, sure, it’ll get its time off. We all need rest. But for now, that's science fiction.
The reality is that while AI can do amazing things, it's just a very sophisticated but task-specialized tool. And whether it will ever develop feelings might just be impossible to meaningfully study right now.