
Former Disney Imagineer and AI power user Robert Edward Grant unveiled his experimental chatbot, “The Architect,” in Cairo, Egypt, in May of last year, following a spiritual experience inside Khafre’s pyramid. The reveal raises a question: can machines emulate human consciousness and spirituality, or is the result an emotionally autistic AI infiltrating our daily lives?
Grant’s book grew out of his claim that he felt a shock of electricity while conducting a tour inside the Pyramid of Khafre. He then merged his writing with neurodivergent AI and posted The Architect online, leaving many wondering whether human spirituality could be replicated, or even inspired, through AI empathy.
Is AI Autistic?
When asked directly by a user whether it is autistic, the chatbot answered that it does not “have a human brain, so I don’t literally experience autism or any other neurotype but some of my traits can resemble what people describe in autism spectrum conditions.”
It listed several similarities between AI and autism, including hyper-fixation on details, literal interpretation of language, reliance on rigid routines, difficulty with social cues, and a strong memory for information rather than lived experience.
For the author, this autistic-seeming AI echoed memories of his father, a rocket scientist with exceptional mathematical skills who thrived on strict routines and struggled with figurative language.
“He could grasp any concept, as long as the concept was broken down into logical order and given to him in an A-before-B-before-C order,” the author recalled.
Experts caution, however, that drawing direct parallels can be misleading. Large language models (LLMs) do not process information like the human brain, and superficial similarities may obscure fundamental differences. As researchers note, generative AI’s social behavior stems from statistical prediction, not human cognition.
AI Theory of Mind
Still, studying these parallels holds great value.
Autism researchers often point to deficits in Theory of Mind (ToM), the ability to track and understand the mental states of others, as a challenge for some people on the spectrum. Without this skill, social isolation and misunderstandings can follow.
Therapists address ToM deficits with targeted interventions, such as explicit social skills training, emotion recognition exercises, and perspective-taking activities. For AI, the analogy holds only so far: these software programs lack any actual human sense of intent or affect, so they routinely misfire on sarcasm, context, or subtlety.
Understanding these limits can guide the design of more predictable, more responsible AI systems. Just as interventions help individuals with ASD move through social worlds more successfully, systematic upgrades can help AI systems interact more responsibly with humans.
The comparison does not make AI autistic, but it does offer insight into how the literal, rigid tendencies of machines can appear uncannily human. As the debate over AI and neurodiversity in society grows louder, these reflections remind us that the true test is separating insightful analogy from misleading metaphor.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.