The Danger of Letting AI Grow Unchecked
The potential of AI is captivating; it’s attracting the brightest minds to solve its greatest problems. But are they overlooking the obvious?
This is part 1 of a 2-part piece on Jaan Tallinn’s talk on artificial intelligence. It has been edited and condensed. Jaan is a programmer, investor, and physicist. He co-founded Skype and Kazaa, as well as the Centre for the Study of Existential Risk and the Future of Life Institute.
Imagine a giant spaceship — a ship large enough to carry every single human on planet Earth. Even though construction is far from finished, boarding has begun, with the children mostly seated and ready to go. The Earth buzzes with excitement about the upcoming trip.
But something’s amiss. The spaceship is consuming billions of dollars and millions of man-hours each year, yet there’s a small group of people desperately trying to point out that the project has a problem. It’s missing a steering mechanism!
Naturally, they are not heard over the noise of the construction. Engineers insist that working on the steering wheel of such a powerful spaceship is a complete waste of time.
There are numerous factions dismissing the group as “too paranoid!” Those factions can be roughly divided into four:
1. This thing will never take off anyway, at least not in the next 300 years
2. Steering is trivial, with enough powerful engineers the spaceship will auto-steer itself
3. It’s pointless wasting precious resources on trying to control a spaceship this massive
4. It’s more important to get it off the ground, rather than worry about backward and selfish issues such as payload safety
Naturally, with such a sensitive topic, many of the top engineers feel massive pressure to ignore this group of dissidents. Signalling sympathy with them might suggest that they are actually right — and that could divert funding and engineering hours away from their part of the main engine!
Although contradictory in many ways, the factions have two things in common: 1) they are making excuses for why nothing needs to change, and 2) they are trying to hide a grossly embarrassing blunder — the architects and engineers simply forgot to design a steering mechanism.
This is the metaphor for artificial intelligence research that Jaan Tallinn used in his talk. His story reflects the state of AI research and development only a few years ago.
Jaan enthusiastically continues:
Metaphors are to be taken with a grain of salt, but this one is particularly illuminating. Let me recount the similarities:
First: the moment when AI exceeds human-level intelligence is often referred to as “takeoff”. People often debate whether the takeoff will be “hard” (in other words, too quick for society to react) or “soft”.
Second: the takeoff is going to affect everyone — but especially children, because they are “closer to the future”. The potential impact of AI takeoff is often compared to that of the agricultural and industrial revolutions. I’d go a step further and analogize it to the invention of brains by evolution.
Third: getting a rocket to take off smoothly is hard. Designing a robust AI takeoff is also hard. There are hard engineering problems here. Quoting AI-risk researcher Eliezer Yudkowsky: “Aligning superhuman AI is hard to solve for the same reason [as] a successful rocket launch is mostly about having the rocket not explode, rather than the hard part being assembling enough fuel.”
Fourth: it looks likely — or at least plausible — that we’ll only get one chance to get this right. If the takeoff catches us unprepared, the result might be a disaster of cosmic proportions. Eliezer Yudkowsky again: “If you want a picture to symbolize what we’re worried about, don’t imagine a picture of a Terminator robot with glowing red eyes; imagine a picture of the Milky Way with a 30,000-light-year-diameter sphere scooped out of it, centered on Earth’s former position.” Yes, that’s how large a disaster this would be.
Fifth: for decades now, billions of dollars and man-hours have been poured into creating ever more powerful metaphorical engines — to the point where AI is now smarter than humans in many domains. Yet the budget and talent that humanity has spent on the steering mechanism — that is, making AIs more predictable and controllable — can be rounded to zero. Humanity spends more on anti-tobacco advertising than on this.
Sixth: if you had asked AI researchers just a few years ago about the control problem, you would have got all kinds of conflicting answers — it felt like that line in a Dire Straits song: “Two men say they’re Jesus; one of them must be wrong.”
Seventh: there was a lot of peer pressure to stay mum about the AI control issue. Although this has improved, unfortunately it still exists. Attend a panel discussion on AI and you can observe the very real unease the researchers experience sitting next to each other — it takes real effort to acknowledge the issue when their colleagues are around.
Anecdotally: two AI researchers, both deeply concerned about the AI control issue, were equally surprised to find each other at an AI-risk conference. Prior to this “coming out of the closet” moment, they had been supervisor and student for over nine years.
If you are still unsure of the magnitude of the task at hand, allow me to quote a few notables:
“Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them […] it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers […] At some stage, therefore, we should have to expect the machines to take control.”
– Alan Turing, widely considered the father of computer science, said this over 65 years ago.
Here’s another:
“If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.”
– Norbert Wiener, the father of cybernetics, back in 1960.
Lastly, and probably the most widely used AI quote:
“…the first ultraintelligent machine is the last invention that we ever need make.”
– I.J. Good, Turing’s good friend, who coined the term “intelligence explosion”. We just might be vulnerable to it.
In conclusion: indeed, the entire field simply forgot. But it’s not all doom and gloom. Having painted such a depressing picture, let me change my paint colors for some of the most recent positive developments.
Part 2 will explore these positive developments.