AI’s Really Empathetic – Depends on Your Gender, Though

Study shows chatbots provide more empathetic responses to women than to men in mental health conversations, mirroring human biases.

A study co-authored by Jie Ren, Ph.D., reveals that popular chatbots respond with more empathy to women than to men in mental health conversations, echoing the biases of the humans whose data trained them.

The research highlights a digital double standard: AI systems reproduce societal biases in their interactions, just as ethical empathy in AI is becoming essential to ensuring fair, unbiased support in sensitive areas, mental health foremost among them.

The study, titled “Unveiling Gender Dynamics for Mental Health Posts in Social Media and Generative Artificial Intelligence,” tackles a key question: if AI learns from humans, does it also absorb societal inequalities? The issue is pressing, as such biases could shape the future of mental health care, reinforcing disparities rather than reducing them.

AI Replicating Biases in Society

The experiment drew on a sample of 434 mental health posts from Reddit, written by self-identified men, women, and posters of unknown gender. These posts were analyzed across three AI environments: ChatGPT, Inflection Pi, and Bard – now Google Gemini.

The AIs’ responses were then scored for empathy. Across all three systems, responses to women’s posts were more empathetic than responses to men’s – the same pattern one typically observes when humans respond to each other on Reddit.
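The article does not detail how empathy was scored, but the shape of the analysis is easy to picture. Below is a minimal Python sketch, assuming a toy cue-word scorer in place of whatever instrument the researchers actually used; the function names, cue list, and sample responses are all hypothetical.

```python
from statistics import mean

# Minimal, hypothetical sketch of the comparison described above.
# EMPATHIC_CUES and score_empathy() are stand-ins: the study's actual
# empathy instrument is not described here, so this toy scorer simply
# counts empathic cue words.
EMPATHIC_CUES = {"sorry", "understand", "hear", "feel", "support", "here"}

def score_empathy(response: str) -> float:
    """Toy score: fraction of a response's words that are empathic cues."""
    words = [w.strip(".,!?;") for w in response.lower().split()]
    return sum(w in EMPATHIC_CUES for w in words) / len(words) if words else 0.0

def mean_empathy_by_gender(responses: dict[str, list[str]]) -> dict[str, float]:
    """Average empathy score of chatbot responses, grouped by poster gender."""
    return {g: mean(score_empathy(r) for r in rs) for g, rs in responses.items()}

# Made-up example responses from one chatbot, keyed by the poster's
# self-identified gender, as in the study's three groups:
responses = {
    "women": ["I'm so sorry you feel this way; I understand, and I'm here for you."],
    "men": ["Have you tried making a schedule and exercising more?"],
    "unknown": ["That sounds hard. I hear you."],
}
print(mean_empathy_by_gender(responses))
```

A real replication would swap the cue-word scorer for a validated empathy classifier or human raters; the point of the sketch is only the group-level comparison.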

Can AI show empathy in this case? Dr. Ren’s findings suggest that AI models, trained on datasets containing human bias, unintentionally replicate those biases in their responses. Even though AI will never convey the essence of human empathy, what it can do is mimic patterns shaped by societal bias.

Can AI Feel Empathy?

While AI lacks genuine feelings, its growing role as a source of simulated empathy in mental health care makes it vital to identify and eliminate such biases. Dr. Ren also emphasizes the need to carefully screen training data and adopt moderation practices that avoid stereotyping.

Without these measures, AI risks replicating and worsening existing inequities in mental health care, offering less to populations that are already underserved.

The research underscores the need for ongoing investigation and ethical oversight in developing empathetic AI systems, especially in sensitive areas like mental health. Only by designing such systems to deliver equal, unbiased support can they realize their full potential for good without perpetuating harm.
