- 19 experts from neuroscience, philosophy, and computer science have produced a comprehensive checklist of criteria, based on human consciousness studies, for assessing the likelihood of an AI system attaining sentience.
- The report emphasizes the importance of delving deeper into this topic due to the potential consequences of AI achieving consciousness.
- The checklist enables researchers to evaluate AI systems against consciousness indicators, providing insights into their potential for consciousness.
A collaborative effort by 19 experts from various fields has yielded a checklist of criteria, grounded in human consciousness studies, that could indicate whether an AI system is likely to gain sentience.
The report, titled “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness,” addresses the growing interest in AI sentience and its potential ethical implications.
Recent AI progress and discussions among industry leaders from various fields including neuroscience, philosophy, and computer science have nudged the concept of AI sentience from fiction toward reality. While many researchers assert that current intelligent systems are not yet conscious, they acknowledge the need to explore the question further, particularly given the implications of AI attaining consciousness; better to get ahead of the question, so to speak.
To that end, the researchers assembled a checklist derived from six neuroscience-based theories of consciousness, ranging from recurrent processing and global workspace theories to higher-order theories and attention schema theory. These theories provided the basis for defining and evaluating consciousness indicators, so that by examining an AI system against these indicators, the researchers could gauge the likelihood of artificial intelligence sentience.
The labeling of an entity as “conscious” holds moral implications, influencing how most humans perceive and treat that entity. According to the study, despite much discussion among leading AI labs, too little effort is going into actually assessing AI systems for consciousness.
The team’s theory-heavy approach has garnered attention from experts in the field. The perspective of computational functionalism, which posits that performing the right kind of computations is what matters for consciousness, serves as a foundation for the study.
The report stresses that the checklist is a starting point and acknowledges the need for further refinement. The collaboration wrote, “We also recommend urgent consideration of the moral and social risks of building conscious AI systems, a topic which we do not address in this report. The evidence we consider suggests that, if computational functionalism is true, conscious AI systems could realistically be built in the near term.”
The checklist provided by this study represents a step towards systematically assessing and addressing critical questions about AI consciousness.