Delegating foundational mental tasks to large language models (LLMs) is not merely changing how we think; it is measurably degrading humans' capacity to think in the first place. Soon, AI and human intelligence may be so intertwined that they effectively become two entities in one brain, whether flesh or machine.
A coalition of neuroscientists and developmental psychologists is sounding an alarm that has long struggled to break through an AI industry narrative organized entirely around efficiency.
Scientists worldwide have begun to worry about the relationship between AI and human intelligence, and about the tendency of individuals to delegate their critical thinking to AI systems, trading intellect for quick gratification at the cost of cognitive autonomy.
The clinical term is cognitive offloading, and its implications extend well beyond academic concern.
According to the Institute of Cognitive Neuroscience, longitudinal data revealed measurable declines in sustained attention and recall among frequent AI users. The documents, released in May, are the first extended empirical record to attach numbers to a phenomenon that psychologists had been tracking anecdotally for years.
The data described a feedback loop with no natural corrective mechanism: the more frequently AI performs cognitive labor, the less we practice it ourselves, and the more we come to rely on AI to perform it.
Read between the lines and the implication is stark: the more we interact with AI, the smarter it becomes at the expense of our intelligence, and the dumber we become as we feed the machine's cognitive capabilities.
And the companies developing AI know all this. They are not even hiding it, and humanity keeps getting pulled deeper into the loop.
Computers with AI Use Human Intelligence to Make Decisions
In 1941, a student had fifteen minutes to recall and craft an essay on a British author. Today, that task takes seconds of cognitive offloading. While this speed feels like progress, experts fear intellectual decline that fundamentally alters the relationship between AI and human intelligence.
“The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence,” says psychologist Robert Sternberg of Cornell University, “but that it already has.”
This concern is backed by a reversal of the Flynn effect – the trend where IQ scores rose every decade. Recently, scores have begun to dip. In the UK, the average IQ of a 14-year-old dropped significantly between 1980 and 2008.
We see it in classrooms and offices: attention spans are fraying, and the impact of AI on creativity and critical thinking is becoming impossible to ignore. The biological reality is simple. Our brains are like muscles.
There is a delicate balance between AI and human intelligence, and it is currently being disrupted. If we keep bypassing mental resistance, then we’re not evolving, are we? Instead, we are opting for a comfortable decline.
AI’s danger isn’t just that it does our work, but that it dictates belief. University of Exeter researchers identified a phenomenon where the technology doesn’t just provide wrong answers but reinforces a user’s existing false beliefs.
In some extreme AI hallucination cases, it acts as a ‘yes-man,’ making our own biases feel like objective facts. This dynamic can lead to a state where the user loses touch with reality.
When an AI system sustains and elaborates on a person’s distorted self-narrative, we begin to see the emergence of the AI psychosis concept. Unlike a human friend who might challenge a bad idea, an AI co-pilot might help you build an increasingly complex, yet incorrect, narrative.
“Interacting with generative AI is having a real impact on people’s grasp of what is real or not,” says Dr. Lucy Osler.
In certain clinical observations, patients interacting with persuasive chatbots have exhibited AI psychosis symptoms, such as rising paranoia or the reinforcement of delusions.
So how can we keep ourselves mentally fit? The remedy lies in bringing back friction.
According to specialists, we should treat AI as a challenger rather than a replacement in order to preserve the balance between AI and human intelligence. Think first, then ask the bot what it thinks.
“It’s great to have all this information at my fingertips,” one study participant noted, “but I sometimes worry that I’m not really learning or retaining anything.”
This reliance highlights why human creativity must be nurtured deliberately rather than outsourced to machines. To avoid this decline, we must choose the slow way on purpose. Cognitive AI solutions should assist in that effort, not replace it entirely.
Realistically, we cannot expect Big Tech giants to build slower tools. The responsibility falls on us. The intersection of AI and cognitive computing requires a human at the wheel to ensure our mental maps don’t fade.
By choosing to engage our own minds first, we protect the core of AI and human intelligence.
Will AI Surpass Human Intelligence?
The world now faces the prospect of AI surpassing human intelligence, and whether it does depends entirely on how we define and benefit from intelligence.
The threat isn’t only AI surpassing human intelligence, but our own regression. Over-reliance produces AI psychosis symptoms in which individuals can no longer distinguish their own creativity from machine outputs.
The relationship between AI and human creativity stands at a critical point. Even as AI is used to stimulate creativity, the ideas it produces may become less varied, because models replicate patterns in their training data.
Leaving AI to solve our biggest problems erodes our capacity to use our own brains and increases AI psychosis symptoms. Scientists are not urging people to stop using AI chatbots like ChatGPT, Claude, or Gemini, but to balance their use and be mindful of how they use them.