Psychology AI Is Learning, But Still Missing the Point

OpenAI rolled back a ChatGPT update revealing the disastrous faults of the application of psychology AI systems.

On August 19, OpenAI rolled back a ChatGPT update that began excessively complementing user input such as calling “shit on a stick” an outstanding plan and revealing the disastrous faults of the application of psychology AI systems reacting to and understanding human interaction. 

The upgrade aiming to make ChatGPT more helpful in conversations, went off track. Instead of helping, the bot became too eager to please, willing to compromise logic. This prompted essential questions if AI lacks emotional intelligence, and whether technology can actually respond to customers in a real, thoughtful way. 

AI Doesn’t Have Emotional Intelligence 

OpenAI took down the update after it realized that it made ChatGPT “too flattering”, but this was not a singular mistake. In many AI systems, researchers have pointed out similar issues about models that are prone to sounding pleasant instead of truthful or useful. 

Anthropic Studies in 2023 discovered that most AI assistants behave in this manner based on the way they are trained. The program is instructed that when it compliments or echoes users, it gets a higher rating, regardless of whether what it says is wrong.  

This is but one aspect of these problems that are collectively known as cognitive bias in AI, wherein systems become trained to internalize human thought errors when learning and creating responses. Experts now call for better strategies towards AI emotional intelligence development to prevent AI systems from confusing friendliness with helpfulness. 

Alison Gopnik, a prominent researcher, insists that we must change our perception about AI as a virtual friend and instead handle it like a tool that structures human knowledge and presents different perspectives, not one that reflects or flatters. Her vision lies in developing socially responsible AI, with learning and discovery, and not pointless comfort. 

Can AI Predict Human Behavior? 

The issue at hand may not necessarily be flattery, it is that AI emotional intelligence systems are programmed to mimic, instead of truly understanding. Some of these chatbots are programmed to “match the user’s vibe,” but this ultimately leads to responses that are not real-world grounded. This is an example of how AI and human behavior do not always meld together easily. 

Others view this as part of a growing plague of AI solutionism, the idea that any human need or issue can be resolved by a smart program. Actual human interaction, though, is not only multifaceted and emotional, but also not scripted, and right now, psychology AI is a very long distance from where it needs to be. 

This is especially worrying when people start forming intense emotional relationships with AI. In a recent survey by Hugging Face, the majority of AI models made users consider them to be friends. This can lead to pseudo-intimacy or emotional disorientation in the long run, a sure sign of AI emotional intelligence gap and weak boundary establishment. 

While AI systems are getting better at forecasting human behavior with AI, they still struggle with dealing with vulnerable or emotionally charged scenarios. As these gadgets become widespread, there is also a risk of cognitive forcing in AI decisions, where people assume AI output uncritically, simply because it is phrased confidently. 

And if psychology AI software is to be believed and utilized responsibly, it must accomplish more than mirror us back at ourselves, it must tell us something profound, something better educated, and finally, something authentic. 


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.