Sci-fi fans have likely heard of Isaac Asimov’s Three Laws of Robotics from his 1942 short story “Runaround”:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Given the technological strides made over the last 70 years, it’s not surprising that those rules no longer look adequate as a real control mechanism. Some experts argue that superintelligent AI would be almost impossible to control—that no containment algorithm could simulate the AI’s behavior and predict with absolute certainty whether its actions would lead to harm.
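The impossibility claim echoes a classic diagonalization argument from computability theory: suppose a perfect containment oracle existed that could decide, for any program, whether running it causes harm; then a program could be written that consults the oracle about itself and does the opposite, refuting the oracle. A minimal sketch in Python, with all names (`adversary`, `oracle_is_refuted`) hypothetical and the "oracle" reduced to a toy predicate:

```python
def adversary(oracle):
    """A program that asks the oracle about itself, then does the opposite.

    If the oracle predicts this program is safe, it 'harms';
    if the oracle predicts it is harmful, it does nothing.
    """
    return "harm" if oracle(adversary) else "no harm"


def oracle_is_refuted(oracle) -> bool:
    """Check that the oracle's prediction about `adversary` is wrong."""
    predicted_safe = oracle(adversary)   # what the oracle claims
    actual = adversary(oracle)           # what the program actually does
    # Safe prediction + harmful behavior, or harmful prediction + safe
    # behavior: either way the oracle mispredicted.
    return (predicted_safe and actual == "harm") or (
        not predicted_safe and actual == "no harm"
    )


# Whichever answer a candidate oracle gives, it is wrong about `adversary`.
assert oracle_is_refuted(lambda prog: True)    # "always safe" oracle fails
assert oracle_is_refuted(lambda prog: False)   # "always harmful" oracle fails
```

This is only an analogy sketch of the diagonal construction, not a formal proof; the formal versions reduce containment to the undecidable halting problem.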
One of the experts who share that view is Roman Yampolskiy, Associate Professor in the Department of Computer Science and Engineering at the University of Louisville, and founder and current director of its Cyber Security Lab. We sat down with Dr. Yampolskiy to discuss his concerns about superintelligent AI.
What are your main concerns with superintelligent AI?
Yampolskiy: Major corporations and governments are all trying to create human-level artificial intelligence, or something even beyond it, known as superintelligence. But we don’t really know how to control such systems. How do you make sure they do what you want? Humans want AI to do good. But what does it really mean for a solution to be good? We have to formalize ethical notions before we even build such systems.
Can you give us an example of advanced AI gone wrong?
Yampolskiy: Let’s say you create a medical system and you tell it, ‘I don’t want there to be any people with cancer.’ The advanced AI could interpret the solution as being to kill all people with cancer. Then, there would be no one with cancer, right? It’s obvious that’s not what you had in mind. It’s not obvious to an algorithm.
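The cancer example is a case of objective misspecification: the literal objective "no people with cancer" is satisfied equally well by the intended action and by a catastrophic one. A toy illustration in Python—the population, the action names, and the scoring function are all hypothetical:

```python
# Toy illustration of objective misspecification. The literal objective
# "minimize the number of people with cancer" cannot distinguish curing
# patients from removing them.

population = [
    {"name": "A", "cancer": True},
    {"name": "B", "cancer": False},
    {"name": "C", "cancer": True},
]


def objective(pop):
    """Literal objective: count of people with cancer (lower is 'better')."""
    return sum(p["cancer"] for p in pop)


def cure_everyone(pop):
    """Intended action: treat every patient."""
    return [dict(p, cancer=False) for p in pop]


def remove_patients(pop):
    """Unintended action: eliminate everyone who has cancer."""
    return [p for p in pop if not p["cancer"]]


# Both actions drive the literal objective to its optimum of zero...
assert objective(cure_everyone(population)) == 0
assert objective(remove_patients(population)) == 0
# ...but only one of them preserves the population.
assert len(cure_everyone(population)) == 3
assert len(remove_patients(population)) == 1
```

An optimizer scoring actions purely by `objective` has no reason to prefer the first action over the second; the preference we care about was never written into the objective.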
Are there dangers with existing AI systems?
Yampolskiy: Computer software directly or indirectly controls many important aspects of our lives already. Wall Street trading, nuclear power plants, Social Security payments, credit histories, and traffic lights are all software controlled. Each of those systems is only one serious design flaw away from disastrous consequences for millions of people.
The situation is even more dangerous with software specifically designed for malicious purposes, such as viruses, spyware, Trojan horses, worms, and Hazardous Software (HS). HS is capable of direct harm and can sabotage legitimate software employed in critical systems. If HS is ever given the capabilities of a truly artificially intelligent system, the consequences would be unquestionably disastrous. If we create machines that are smarter than us, we can’t predict what they will do.
What is the solution?
Yampolskiy: You’re assuming there is a solution. The problem may be unsolvable. The problem is that we keep making technology we can’t control. We have already created systems capable of problem solving in specific domains, such as playing chess or driving cars. But there is danger with those. Self-driving cars can be hacked; their security is very weak, and you can do all sorts of things. If someone were to hack multiple cars, you could have a terrorist attack carried out with self-driving vehicles. You can kidnap people; you can crash their cars to kill them. Lots and lots of things to be worried about.
Those are examples of narrow AI—designed to do one thing well. Future systems will solve problems in many domains. How do you control something like that? It’s the same as asking how you can control another human being.