Today: Covid-19; Tomorrow: AI Robots Determined to Destroy Mankind?
- Arjun C-M
- May 22, 2020
- 4 min read
If we were to believe everything Hollywood projects onto our screens…
HAL from 2001: A Space Odyssey, the Terminator from the eponymous film, and the Mecha/Specialists in A.I. Artificial Intelligence are a few of the many Hollywood portrayals of AI machines seemingly capable of wiping out humanity. So should we be scared? Not really, at least not according to Melanie Mitchell in her recent book Artificial Intelligence: A Guide for Thinking Humans. In this post, I will address the skepticism surrounding artificial intelligence. But first, I would like to mention some notable achievements of AI machines over the years.

In the 1970s and 1980s, many computer scientists speculated about whether a nonhuman entity could ever beat a human at chess. The speculation was justifiable; chess is widely considered a game that demands the highest order of human intelligence. Yet in 1997, an IBM machine called Deep Blue dethroned the reigning world chess champion, Garry Kasparov, in a six-game contest. Fast-forward fourteen years and you probably remember the 2011 Jeopardy! series in which IBM Watson defeated Brad Rutter and Ken Jennings, the latter being the recent winner of Jeopardy!'s Greatest of All Time competition. More recently, a less familiar feat occurred: in 2016, AlphaGo, a program developed by DeepMind, took down Lee Sedol in a five-game match of Go. Go is the Eastern counterpart of chess, and many tout it as a greater intellectual challenge given its 19×19 board and the vastly larger space of possible moves. While all three of these accomplishments are impressive and do represent progress in AI, they are often misconstrued by the media.
After each feat, newspapers were quick to broadcast headlines claiming that AI machines had reached or even surpassed human intelligence. This is misinformation. These machines are not intelligent in a humanlike sense. While Deep Blue is a chess champion, IBM Watson a Jeopardy! champion, and AlphaGo a Go champion, none of them can perform any other task at even a beginner's level. Take any other board game: Deep Blue and AlphaGo would not know where to begin if asked to play Monopoly or Scrabble. In fact, AlphaGo would have difficulty playing chess, and Deep Blue would struggle with Go. If Jeopardy! altered its format to drop the requirement of answering in the form of a question, IBM Watson would be stumped. This is because all three machines, along with most current AI products, can only be programmed to function in very narrow domains (discussed further below). If that is true, how can we conclude that these computers are at a human level of intelligence? Outside their domains, they are hopelessly inept, less capable than a three-year-old. And perhaps the most telling point is that they are not even intelligent enough to recognize their own accomplishments. Now that is ironic!
But the question is not as simple as "When will AI machines reach human intelligence?" I would assert that a computer can never achieve this level. Human intelligence comes down to two main components. The first is learned experience: the process by which we pick up new ideas and then apply them to future situations. The science behind this everyday skill is quite complicated. It involves the brain's complex neural system, in which neurons process pieces of information at very high speed and organize them into many different schemas. The human brain has over 80 billion neurons, which makes it difficult for scientists to pinpoint the mechanisms involved in learning. Most current AI systems are loosely modeled on the brain's neural networks. So if scientists still cannot fully understand the processes that underpin learning, how can AI engineers build a machine that replicates them?
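To see just how loose that modeling is, here is a minimal sketch (my own illustration, not from Mitchell's book) of the basic unit that "neural" AI systems are built from: a weighted sum followed by a squashing function. Millions of these simple units stacked together is what passes for a "brain" in current AI, a far cry from 80 billion biological neurons.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs pushed
    through a sigmoid activation. This is the entire unit; the
    biological neuron it is named after is vastly more complex."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Example values are arbitrary; the output is just a number in (0, 1).
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

Everything an artificial network "knows" lives in those weight numbers, tuned for one narrow task, which is part of why Deep Blue's weights are useless for Go.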
The second issue is that AI systems cannot create analogies. A large part of human decision-making is based on analogy. Even when we face an unfamiliar situation, we can use bits and pieces of other experiences to derive a solution. Say you've recently been gifted a ticket (Section 300, Row X, Seat 5) to a New England Patriots game at Gillette Stadium, and it is your first time there. You probably wouldn't have a hard time finding your seat. Perhaps you've stayed in a hotel, and so could draw a comparison between a room in the 300s and a section in the 300s, both of which sit on the third level of the building or stadium. Next, you would draw an analogy between the row labels and the alphabet: because X is one of the last letters, you could safely presume that Row X is far back in the section. Lastly, knowing that people count 1, 2, 3…, you would guess that your seat is about five seats from the aisle. Obviously, most people are not consciously thinking through these steps while finding their seats. But that's the point: we make use of analogies so routinely in everyday life that it becomes second nature.
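The seat-finding reasoning above can be written out as explicit rules, and doing so makes the contrast vivid. Here is a toy sketch (my own, with hypothetical function and variable names): a machine needs every analogy spelled out as a hand-coded rule, while a person infers all three on the spot.

```python
def locate_seat(section: int, row: str, seat: int) -> str:
    """Hand-coded version of the reasoning a human does by analogy."""
    # Rule 1 (the hotel analogy, hard-coded): section 3xx -> level 3.
    level = section // 100
    # Rule 2 (the alphabet analogy, hard-coded): later letter -> farther back.
    depth = ord(row.upper()) - ord("A") + 1   # X is the 24th letter
    # Rule 3 (the counting analogy, hard-coded): seats numbered from the aisle.
    return f"level {level}, row {depth} back, seat {seat} from the aisle"

print(locate_seat(300, "X", 5))
# -> level 3, row 24 back, seat 5 from the aisle
```

Change the stadium's labeling scheme and these rules silently break; a human simply forms a new analogy. That gap is exactly what current AI cannot bridge.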
Computers do not have this same ability to generalize and form analogies across different scenarios. Most AI machines can do only the tasks they were programmed for. Scientists are still searching for ways to program learning and abstract thinking into computers, but more than likely they will not crack this code in our lifetime, or any lifetime. If AI engineers cannot replicate these basic cognitive functions, it is unlikely that machines will ever reach human intelligence, let alone the superintelligence that would make them capable of destroying mankind.
Citations
Mitchell, Melanie. Artificial Intelligence: a Guide for Thinking Humans. Farrar, Straus, and Giroux, 2019.