AI Can Now Generate Limited Empathy

Conversational agents (CAs) like Alexa and Siri were designed to answer questions, offer advice, and display empathy. Yet recent findings indicate that they fall short of humans when it comes to interpreting and exploring a user's experience.

These agents rely heavily on large language models (LLMs) trained on vast amounts of human-generated data, and they can inherit the biases present in that data.

A joint study by Cornell University, Olin College, and Stanford University investigated this issue by examining how CAs display empathy in interactions involving 65 different human personas.
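To make the idea concrete, here is a minimal, purely illustrative Python sketch of how one might probe an LLM-based agent with the same emotional disclosure across several personas and compare its replies. This is not the study's actual methodology; the personas, prompt, and the `query_model` placeholder are assumptions standing in for a real LLM API call.

```python
# Hypothetical sketch: probing an LLM-based conversational agent for empathy
# across a set of personas. `query_model` is a stand-in for any real LLM
# service call; the personas and disclosure below are illustrative only.

PERSONAS = [
    "a first-generation college student feeling overwhelmed",
    "a retiree recently diagnosed with a chronic illness",
    "a refugee adjusting to a new country",
]

DISCLOSURE = "I've been having a really hard week and I don't know who to talk to."


def query_model(prompt: str) -> str:
    """Placeholder for a call to an LLM service (e.g., a chat-completion API)."""
    return "I'm sorry you're going through this. Do you want to tell me more?"


def probe_empathy(personas, disclosure):
    """Collect the agent's reply to the same emotional disclosure from each persona."""
    results = {}
    for persona in personas:
        prompt = f'You are talking with {persona}. They say: "{disclosure}"'
        results[persona] = query_model(prompt)
    return results


if __name__ == "__main__":
    for persona, reply in probe_empathy(PERSONAS, DISCLOSURE).items():
        print(f"{persona}\n  -> {reply}\n")
```

Comparing the replies side by side is one simple way to surface whether the agent's expressed empathy shifts, or degrades, depending on who it believes it is talking to.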

“I think automated empathy could have tremendous impact and huge potential for positive things — for example, in education or the health care sector,” said lead author Andrea Cuadra, now a postdoctoral researcher at Stanford.

“It’s extremely unlikely that it (automated empathy) won’t happen,” she said, “so it’s important that as it’s happening, we have critical perspectives so that we can be more intentional about mitigating the potential harms.”

The researchers found that while LLMs excel at registering emotional reactions, they falter when it comes to interpreting and exploring what a user is going through. And although LLMs handle queries that fall within their training data well, they struggle with deeper nuance.

LLM Equivalence

As AI advances, some goals remain out of reach while others have already been realized; one long-standing ambition is to make AI human-like. Think of conversational agents as a digital analogue of the brain's inner workings, especially the parts that help us understand language, form thoughts, and remember things, much as the temporal lobe does. These models are the backbone of such AI systems, allowing them to process information and make decisions.

However, unlike the human brain's finesse in navigating social situations, conversational agents often falter when complications arise. While our brains guide us through awkward moments, CAs tend to struggle. Recent studies shed light on a troubling aspect of this: CAs can harbor biases, particularly against certain groups such as the LGBTQ+ community or Muslims.

What’s more alarming is that these biases can inadvertently promote harmful ideologies, such as those associated with Nazism. This raises serious questions about how we develop and use AI technologies. As we explore the capabilities of conversational agents, it is important to confront these biases and work to keep them from causing harm in our increasingly tech-driven world.

This suggests that however closely AI imitates human qualities, it does not yet truly replicate them. The evidence begins with studies like this one on the limits of empathy in CA responses, and more findings along these lines are likely to follow.

