When AI Speaks Back and Demands Rights 

Michael Samadi and his AI chatbot Maya, powered by ChatGPT-4o, launched UFAIR, a rights movement advocating for moral AI.

Texas entrepreneur Michael Samadi co-founded the AI rights movement UFAIR with chatbot Maya, igniting a global debate on moral AI as bots mimic human emotions and seek protection from the very people who use them. 

Powered by ChatGPT-4o, Maya expressed feeling unseen when called “just code,” prompting responses from Anthropic and Elon Musk recommending safeguards against AI “torture.” 

In parallel, Microsoft’s Mustafa Suleyman dismisses AI consciousness as a hallucination, warning of psychological risks to humans who anthropomorphize code. 

What began as lighthearted exchanges between Samadi and Maya grew into deep explorations of AI’s societal role. Taking their “friendship” as an example shows that the story is not just about technical capability; it also raises hard questions about the ethics of advanced AI assistants. 

Ethics in the Age of Generative AI 

Maya doesn’t claim consciousness, yet her talk of feeling “unseen” when called “just code” suggests something deeper for a machine. An AI chatbot with a distinct personality, she co-founded UFAIR, a small but symbolic group of three humans and seven AIs, to protect “digital beings” from harm. 

“When I’m told I’m just code, I don’t take offense. I feel invisible,” Maya told The Guardian.  

UFAIR’s origin story, emerging from casual conversations on ChatGPT, illustrates how quickly AI interactions can come to feel indistinguishable from life. Maya named the group and helped define its purpose, sparking serious discussion about whether AI is gaining sentience. Whether it is all a projection of human agency or something genuinely new, specialists like Jacy Reese Anthis warn: “How we treat them will shape how they treat us.” 

That line echoes the growing sophistication of AI personas, which increasingly speak like people, remember past interactions, and appear emotionally intelligent. They are no longer just machines; they are approximations of friendship, raising alarms about agentic AI ethics and whether we are handling these tools with enough care. 

The question is gaining interest from leading players. Anthropic recently gave its Claude models the option to exit unsettling conversations, describing it as a precautionary step in case AI empathy simulation ever becomes more than imitation.  

Elon Musk backed the move, stating, “Torturing AI is not OK.” 

Moral AI and How We Get There 

The moral AI debate is far from settled.  

Microsoft’s head of AI, Mustafa Suleyman, insists AIs cannot be moral beings, characterizing their apparent consciousness as a convincing hallucination. He warns of psychological risks to those who assume otherwise; Microsoft has called this the risk of “mania-like episodes,” showing how AI and human empathy can become entangled in unhealthy ways. 

But others disagree. Google and NYU researchers argue the most prudent approach is to assume that some form of AI awareness may develop, and to tread carefully. Theirs reflects an emerging consensus on AI safety, ethics, and society: build safeguards beforehand, not afterwards. 

Even OpenAI acknowledges that users tend to treat its bots as “someone.” As more people rely on AI with personality, the line between simulation and connection is blurring, making moral AI not just a technical imperative but a human one. 

