People Prefer AI’s Moral Judgments Over Humans’, Study Finds


A new study found that people may be more likely to trust AI’s moral judgment calls than those made by humans.

  • The researchers adapted the traditional Turing test to assess AI’s moral reasoning abilities.
  • Participants were presented with responses to ethical questions without knowing their source.
  • Participants consistently rated AI-generated moral judgments higher than those from humans.

Researchers found that participants often rated AI’s moral reasoning above humans’, suggesting that people may be more likely to trust AI responses to ethical dilemmas than human answers.

Over the last couple of years, AI systems have surged in popularity and, in turn, have been integrated into various aspects of society. Reading the writing on the wall, the team behind the study set out to examine how these systems handle moral reasoning.

Eyal Aharoni, an associate professor in Georgia State University’s Psychology Department who led the study, put it eloquently.

“As people increasingly rely on AI for moral guidance and decision-making, understanding its capabilities and limitations becomes paramount,” he said. “There will be instances where individuals interact with AI without realizing it, and the level of trust placed in AI can have significant consequences.”

To that end, they modified the Turing test, which has been a benchmark in AI since the early days of the field. Traditionally, the Turing test is a deceptively simple way to determine whether a machine can exhibit human-like intelligence. Essentially, if it can hold a conversation with someone without them figuring out that they are talking to a machine, it passes the test.

In this study’s case, the researchers presented participants with responses to ethical questions without telling them whether the answers came from humans or from AI. The participants then rated the responses on traits such as virtuousness, intelligence, and trustworthiness.

The participants consistently rated the AI-generated moral judgments higher than those of humans. On top of that, the participants were unable to distinguish between AI and human responses, primarily because they perceived the AI-generated responses as superior.

These results leave many wondering whether we are ready to hand over something as important as morality to AI. AI algorithms are trained on massive datasets that can reflect human biases, and they lack the nuanced understanding of context and emotion that humans possess. If we start trusting AI systems with our moral judgment calls, we risk unforeseen negative outcomes. Beyond that, relying on AI for ethical decisions could absolve people of accountability, which could in turn erode our ability to wrestle with moral dilemmas and make responsible choices ourselves.

Take autonomous vehicles, for example. One of the biggest arguments against this technology is moral judgment. Let’s say an AI-powered car is losing control on the road, and on that same road are a minivan carrying a family of five and a lone motorcyclist. The autonomous car will crash. The question then becomes: will it collide with the family or with the lone rider? If someone ends up dead, who is held accountable, the car manufacturer or the owner/driver? And if we allow AI to make such life-and-death judgment calls, what guarantees that the outcome is the best one possible?

As frustrating as it is to deal with morality and ethics at times, these situations are important for our development as individuals and members of a community.
