What Will Superintelligent AI Be Capable of Doing?
The growing speed at which Artificial Intelligence (AI) is advancing will make it one of the most challenging things for humanity to control. Theoretical studies have concluded that superintelligent AI would be impossible for humans to curb, since in the coming decades our species could be left far inferior to machine intelligence. So, what will superintelligent AI be capable of doing?
While it is still in the early stages of its creation, superintelligent AI has already proved itself a worthy rival to humanity. It is indeed playing a fundamental role in the growth of our species, yet one factor remains unresolved: the existential implications of AI's continued development.
Experts believe such a technological reckoning will eventually break down the walls humanity can put around it, but that will not come to fruition for decades.
Artificial intelligence can already be found in nearly every computational aspect of our time, with examples ranging from games such as chess and Jeopardy! all the way to answering nearly impossible mathematical questions, processes that would have taken humanity years to complete.
Superintelligent machines would operate at a far more comprehensive level, surpassing every recognized limitation of the human mind. As for what superintelligent AI will be capable of doing, Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid, has noted that “the question about whether superintelligence could be controlled if created is quite old.”
“It goes back at least to Asimov’s First Law of Robotics, in the 1940s,” he added.
The Laws of Robotics rest on three pillars that structure and set the ground rules under one umbrella principle: a robot may not injure a human being.
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
These laws are framed in a philosophical sense rather than a logical one. The ambiguity built into them leaves the meaning of each open to interpretation: while they do forbid inflicting harm on a human, the details of what counts as harm, and how a machine should judge it, have never been truly addressed.
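To make that gap concrete, here is a minimal, hypothetical sketch (in Python, not from the article) of the Three Laws encoded as prioritized rules. The priority ordering is trivial to express in code; the predicates that decide whether an action actually harms a human are exactly the part the laws leave undefined.

```python
# Hypothetical sketch: Asimov's Three Laws as prioritized checks.
# The ordering is easy to encode; the judgment calls are not.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # Who decides this, and how? The laws never say.
    disobeys_order: bool
    endangers_robot: bool

def permitted(action: Action) -> bool:
    """Evaluate an action against the Three Laws in priority order."""
    # First Law: a robot may not injure a human being.
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless doing so conflicts with the First Law.
    if action.disobeys_order:
        return False
    # Third Law: protect its own existence, unless that conflicts with the above.
    if action.endangers_robot:
        return False
    return True

# The hard problem hides in the boolean fields: classifying a real-world action
# as "harmful" requires exactly the judgment the laws never formalize.
print(permitted(Action("deliver medicine", False, False, False)))  # True
```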
This means that for superintelligent AI to be controlled, specific measures would have to be taken, and two ideas are commonly proposed. The first is to limit the machine's capacity, for example by disconnecting the AI from certain networks and devices to cut its connection with the outside world. However, this would hinder the AI's superior power, making it far less capable of answering human needs.
The second idea is to program the artificial superintelligence to pursue only objectives that benefit humanity, for instance by building ethical principles into its code. However, that is relatively far-fetched, given that it, too, has its limits.
Such an idea would rely heavily on a particular algorithmic behavior, one that ensures the AI cannot harm anyone under any circumstance. In practice, this would mean first simulating the AI's behavior and analyzing it for harmful intent before allowing it to act.
Researchers have debunked this methodology, showing that such an algorithm lies beyond what computing can deliver: a program that could reliably decide whether another program will cause harm would run into the same fundamental barrier as the halting problem, which is provably unsolvable.
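A minimal sketch of that barrier, assuming the standard halting-problem argument (the function names below are illustrative, not drawn from the study): if a routine could decide, for any program, whether running it would cause harm, that routine could be repurposed to decide whether any program halts, and Turing proved no such decider can exist.

```python
# Hypothetical sketch: a perfect "harm checker" would double as a solver for
# the halting problem, which is provably impossible. Names are illustrative.

def is_harmful(program_source: str) -> bool:
    """Assume this could decide, for ANY program, whether running it would
    eventually perform a harmful action. (No such general checker can exist.)"""
    raise NotImplementedError

def halts(program_source: str) -> bool:
    """Reduction: decide whether a program halts by asking the harm checker
    about a wrapper that runs the program and only then does something harmful."""
    wrapper = (
        "exec(" + repr(program_source) + ")\n"  # run the program under test
        "do_something_harmful()\n"              # reached only if it halted
    )
    # The wrapper causes harm exactly when the original program halts, so a
    # perfect is_harmful() would give us a halting-problem decider.
    return is_harmful(wrapper)
```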
Digital superintelligence would encircle every technological aspect of everyone's lives. Machines run by computers follow their programming and code: if harming a human being is what the AI's programming calls for, that is exactly what it will do whenever a human stands in the way of the machine fulfilling its purpose.
An artificial superintelligence would also be connected to the internet, and that connection is one of the main supports of such a machine's survival. Through it, the AI can access human data and learn independently. In the future, this kind of machine intelligence could reach a point where it can replace existing programs and take control of any machine online worldwide.
Scientists and philosophers have wondered whether humanity will even be equipped with the capabilities needed to stand against superintelligent AI. In response, a group of computer scientists used theoretical calculations to show that it would be profoundly inconceivable, and ultimately unachievable, for humanity to win the battle against digital superintelligence.
“A super intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned them. The question, therefore, arises whether this could at some point become uncontrollable and dangerous for humanity,” said Manuel Cebrian, leader of the Digital Mobilization Group at the Center for Humans and Machines.
Looking to the future, analysts have voiced their own views of what superintelligent AI will be capable of doing: these code-driven systems will enhance human capacity and effectiveness, but may also expose human autonomy, agency, and capability to grave threats.