Man and machine: What AI is and what it is not
Intelligence is a hard thing to define. There are street smarts, book smarts, the ever-present colloquial term emotional intelligence, and of course the dreaded intelligence quotient, or IQ.
However, we must place a marker somewhere to establish what we are talking about. For the sake of not exceeding a word count of ten thousand and angering the almighty algorithm, we will define intelligence as the following:
The ability to gather information and use said information to solve a given problem. In other words, learning and problem solving.
What AI is: The great optimizer of lesser tasks
First, to clarify two things.
One, in this article we are talking about artificial intelligence as it exists today, not some future singularity that will transform the way we live – although modern AI has certainly already done that.
Two, AI is not a single entity. It is more like a collection of systems, each trained on its own data for a very specific task. Think of object and image recognition for automated vehicles, limb movement for robots, adaptation in machine learning, speech recognition, and natural language processing.
Essentially, today’s AI is built to automate lesser tasks that most humans would rather not do, or to help a human specialist work faster. Most applications of artificial intelligence can barely be classified as intelligent; they are better described as hyper-efficient workers that never take coffee breaks.
What AI is not: Smart
Taking an example from a TED Talk by Janelle Shane, an optics research scientist and artificial intelligence researcher, the AI in question was given a set of instructions and a goal.
In a 3D simulation, the AI was told to get from point A to point B “by using its legs, facing forward, and not using its arms.” The result: the AI’s simulated human body model launched itself forward, flailing its arms wildly (technically not using them), landed on its face, rolled, and repeated the process until it reached point B.
In another simulation, this time with a 2D body, the objective was the same. Instead of using its legs to walk, the AI disassembled its body, stacked its parts into a tall tower, toppled itself over, and landed exactly on point B.
Again, quite an optimal solution, but not what anyone had in mind.
Perhaps the AI concluded that using legs the way we mere humans do was subpar and could be optimized, since nobody told it to account for the frailty of our bodies or our mortality.
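To make the point concrete, here is a minimal sketch in Python. It is not Shane’s actual simulation, just an illustration of the underlying problem: if the objective only measures whether the body ends up at point B, it cannot tell a graceful walk from a faceplant-and-roll, so the optimizer has no reason to prefer the solution we had in mind.

```python
# A minimal sketch (not Shane's actual setup): the only thing the objective
# measures is how close the body ends up to point B. Gait, posture, and
# self-preservation are never mentioned, so the optimizer is free to ignore them.

def reward(final_position, goal_position):
    """Higher is better; depends only on distance to the goal."""
    return -abs(goal_position - final_position)

goal = 10.0

# Three hypothetical strategies that all finish at the goal.
strategies = {
    "walk upright on two legs": 10.0,
    "faceplant, roll, repeat": 10.0,
    "stack body parts into a tower and topple over": 10.0,
}

for name, final_position in strategies.items():
    print(f"{name}: reward = {reward(final_position, goal)}")

# All three strategies earn the same maximal reward, so the AI has no reason
# to prefer the one a human would consider "sensible".
```

The fix is not a smarter optimizer but a better-specified objective, and that specification is exactly the part humans keep forgetting to write down.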
This is why self-driving cars are so hard to implement. Sensor technology is one thing, but deciding what to do with that information is a whole other issue, and the most advanced AI that exists today can do very little – practically nothing – without a robust supporting network infrastructure around it.
If told to drive to a spot across the mountain, the last thing anyone wants is for the car to go off road and drive off a cliff because the seemingly obvious condition of “Don’t kill the passenger” was never explicitly coded in.
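The same idea, as a hypothetical route-planning sketch (the names and numbers here are made up for illustration, not taken from any real self-driving stack): a planner that only minimizes travel time will happily take the cliff, because the “obvious” safety rule carries no weight until someone writes it into the cost function.

```python
# Hypothetical illustration: two candidate routes, one sensible and one
# fatal but faster.

routes = [
    {"name": "mountain road", "minutes": 45, "survivable": True},
    {"name": "off-road, over the cliff", "minutes": 12, "survivable": False},
]

def naive_cost(route):
    # Only travel time is specified; nothing about keeping the passenger alive.
    return route["minutes"]

def safe_cost(route):
    # The "obvious" rule made explicit: an unsurvivable route is never acceptable.
    return route["minutes"] if route["survivable"] else float("inf")

print("naive planner picks:", min(routes, key=naive_cost)["name"])  # the cliff
print("safe planner picks:", min(routes, key=safe_cost)["name"])    # mountain road
```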
Nothing is obvious to an AI, and that is what makes humans smarter.
For all the doomsayers out there, the most likely scenario in which AI is the villain and we are the victims is a self-inflicted, weaponized AI that does exactly what it was intended to do. Unless, of course, Elon Musk’s worst nightmare comes true and artificial intelligence becomes the new singularity that overtakes and outlives us all.