Does AI Comprehend Consciousness on a Human Level? 

In 2025, tech leaders across the UK and US began debating consciousness in AI, questioning whether machines like ChatGPT are truly self-aware or merely simulating humanity in a world shaped by intelligent code. 

From science fiction to the lab benches of today, machine consciousness has moved from dystopian movies to serious study and debate. For nearly a hundred years, popular culture has warned of clever machines turning against their inventors, from Fritz Lang’s Metropolis (1927) through 2001: A Space Odyssey and Mission: Impossible – Dead Reckoning.  

Today, those fantasy horrors have fresh relevance as AI becomes increasingly adept at mimicking human interaction. 

The development of large language models (LLMs) such as ChatGPT and Gemini has also stirred controversy, not just over their capabilities but over machine awareness. The fluency of their conversations has surprised even their creators, and some researchers have concluded that AI will one day have a consciousness of its own.  

Prof Anil Seth, who leads the Sussex University team, has stated: “We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn’t mean they go together in general, for example in animals.”  

As the line between artificial behavior and real awareness blurs, there is an urgent need to distinguish performed consciousness from the real thing. 

The Mirage of Machine Empathy  

Researchers at Sussex University are leading a new approach to understanding AI consciousness by breaking down the elusive concept into measurable components, mirroring how 19th-century science demystified life itself by shifting from vitalism to biological mechanisms.

Led by cognitive scientist Seth, the team is analyzing discrete brain functions – like perception and self-monitoring – that collectively produce awareness. By avoiding vague philosophical debates, their framework aims to identify specific “markers of consciousness” that could eventually help determine whether, or how, AI systems might achieve genuine sentience, even as the field grapples with fundamental questions about subjective experience in machines.

But a group of technologists holds that machine consciousness may already exist.

In 2022, Google suspended engineer Blake Lemoine for claiming that its chatbot could experience pain. In 2024, Anthropic’s AI welfare officer Kyle Fish said there was a 15% chance that today’s AIs were already conscious. But even authorities such as Professor Murray Shanahan of Google DeepMind caution against rushing to judgment when so little is understood. 

“We don’t actually understand very well the way in which LLMs work internally, and that is some cause for concern,” he tells the BBC. 

Some researchers hold that human traits like empathy, intuition, and moral reasoning arise not from code but from biological processes that AI simply cannot replicate. Seth further argues that our brains are not so much “meat-based computers” as living, conscious systems that cannot be mechanically duplicated. 

“A strong case can be made that it isn’t computation that is sufficient for consciousness but being alive,” Seth says. 

Human Traits Should Not Be Programmed 

One real fear accompanying the assignment of human traits to machines is that when AI simulates emotions, it still does not experience them. And such deception is dangerous: Seth cautions that the assumption that machines are sentient can erode our moral standards. 

“Increasingly human relationships are going to be replicated in AI relationships, they will be used as teachers, friends, adversaries in computer games and even romantic partners. Whether that is a good or bad thing, I don’t know, but it is going to happen, and we are not going to be able to prevent it.”  

In the rush to create thinking machines, we must pause and reflect: consciousness in AI is not programmable. Empathy is not an algorithm, and uniquely human traits must remain more than a template for machines. 
