AI Could Spark 'Social Ruptures' Over Sentience Debates, Experts Say

By November 22, global debate over AI consciousness had intensified as experts predicted the emergence of “emotion AI”, sparking ethical dilemmas about whether machines could experience emotions such as joy or pain, and how they should be treated if they prove to be sentient, according to The Financial Times.

The prospect of intelligent technology achieving consciousness raises deep ethical questions and is reshaping debates on intelligence and morality. If machines could experience subjective emotions, it would raise pressing questions about their treatment and their rights as sentient entities.

Could robots one day possess rights akin to those of animals, or even humans? Beyond ethical considerations, such a paradigm shift could redefine societal norms, challenging deeply ingrained cultural and philosophical notions of life and intelligence. The ramifications would extend to policymaking, religious ideologies, and even international relations as humanity wrestles with this technological frontier.

If AI Develops Consciousness

A group of transatlantic academics has forecast that conscious “emotion AI” could emerge by 2035, bringing with it a host of ethical problems. If AI systems can indeed feel something akin to emotions, would they then be entitled to welfare rights similar to those of humans, or even animals?

Jonathan Birch, a co-author of the study “Taking AI Welfare Seriously”, whose contributors include researchers at Oxford and Stanford, warns of “social ruptures” as subcultures battle each other over their beliefs about AI sentience, adding that “there’s a very serious danger of societal divisions.”

He also predicted heated debates between those who view AI as living entities and others who consider them mere machines. Such disagreements could fall along cultural and religious lines, with countries like Saudi Arabia potentially adopting different stances on AI sentience from strictly secular nations such as France.

The study urges AI developers to test their models for signs of consciousness and to consider whether their systems can suffer or experience happiness.

Ethics and Regulation in the Age of Conscious Machines

With governments and tech firms meeting this week in San Francisco to establish AI safety guidelines, concerns about the ethical implications of sentient AI systems are growing. Critics like Birch argue that many AI companies prioritize profitability over addressing whether they might be creating entities capable of consciousness.

Patrick Butlin, a research fellow at Oxford University, urges that frameworks used to assess animal sentience be applied to AI. Knowing whether a domestic robot feels “frustrated” when mishandled, for example, could inform policy decisions. He also warned against developing AI systems without understanding their possible risks, saying that unregulated systems might resist human control in dangerous ways.

Not all experts agree. Neuroscientist Anil Seth says that AI consciousness is a theoretical possibility, but likely far from realization.

“We must distinguish intelligence from consciousness,” Seth said, emphasizing that intelligence involves problem-solving, while consciousness creates subjective experiences filled with emotions and sensations.

The debate over the consciousness of emotion AI underscores its serious implications for ethics, technology, and society at large. As AI systems grow more advanced, the world may face unprecedented questions about what kind of treatment intelligent machines deserve, and what the answers would mean for humanity.
