We’re Taking the Wrong Road to Human-Like AI, Experts Say

At the 2025 Association for the Advancement of Artificial Intelligence (AAAI) Presidential Panel on the Future of AI Research, leading researchers warned that pursuing artificial general intelligence (AGI) by steering models toward human-like AI may be misguided, with public expectations distorting the progress of AI development.

The report, Future of AI Research, drawing on contributions from 475 AI researchers, found deep concern about the way hype is affecting both public opinion and research agendas.

MIT’s Rodney Brooks, computer scientist and chair of the panel’s AI Perception vs. Reality section, called the Gartner Hype Cycle a long-standing guide, saying, “They’ve been using it for years,” and explaining the pattern of technologies peaking in popularity and then crashing in expectations.

The study found that 79% of AI researchers believe public opinion does not reflect current capabilities, and 90% say that this mismatch is directly affecting how research is done.

“Large sections of public discourse are too accepting of hype,” said Brooks.

The researchers argued that public and media pressure have pushed AI labs and companies to prioritize flashy short-term wins over fundamental, long-term breakthroughs, and that this hinders progress toward AGI – AI systems capable of learning, reasoning, and adapting like humans.

Scaling Current Models Will Not Suffice

The Future of AI Research report finds that simply scaling current AI models to be larger and faster won’t achieve AGI: 76% of researchers surveyed believe that scaling current techniques – such as large language models – will not be enough to achieve full human-like intelligence.

“AI was once limited to tasks where errors didn’t matter much, like product recommendations,” said Henry Kautz, chair of the section on Factuality & Trustworthiness, adding that “now it’s improving fast, but we’re not there yet.”

Kautz believes the next step in trust and reliability will come from “teams of AI agents that fact-check each other and keep each other honest.”

Despite their warnings about human-like AI development, the researchers remain hopeful. They call for responsible innovation, ethical oversight, and collaborative development – not simply bigger models. As the report puts it, “We aren’t going back to a world without AI. The only way forward is to build it better.”

AI is not yet the superintelligent power that some foresee, but with a shift in focus and more clearly defined goals, the route to AGI can be made more grounded – and more optimistic.
