Safe AI

The Economist:

These parallels should comfort the fearful; they also suggest concrete ways for societies to develop AI safely. Just as armies need civilian oversight, markets are regulated and bureaucracies must be transparent and accountable, so AI systems must be open to scrutiny. Because systems designers cannot foresee every set of circumstances, there must also be an off-switch. These constraints can be put in place without compromising progress. From the nuclear bomb to traffic rules, mankind has used technical ingenuity and legal strictures to constrain other powerful innovations.

Just as our governments do what they want because civilian oversight is lacking. Have a look at some of the technologies governments use for bulk surveillance and phone-record collection: StingRay, Dirtbox, ARGUS-IS. Today, May 7, 2015, a federal appeals court ruled that the National Security Agency’s bulk collection of US citizens’ phone records is not authorized by the Patriot Act and is therefore illegal. Will the NSA appeal? There is no doubt. Armies — and their private contractors — definitely need oversight (October 2014):

A federal jury in Washington convicted four Blackwater Worldwide guards Wednesday in the fatal shooting of 14 unarmed Iraqis, seven years after the American security contractors fired machine guns and grenades into a Baghdad traffic circle in one of the most ignominious chapters of the Iraq war.

Just as regulated markets help monopolies thrive. Just as our government scratches the backs of giant corporate enterprises deemed too big to fail. Just as bureaucracies are opaque — or, to be more precise, so complex that they are completely dark — and accountable to no one. Consider the recent news that Goldman Sachs is considering closing its dark pool, Sigma X. What’s the big deal? Marcus Baram, International Business Times:

The off-exchange platforms that let traders buy and sell stock anonymously, under the radar of the rest of the market, have quietly surged in practice by large institutional funds and pension funds. Such dark pools, which are now responsible for about 12 to 15 percent of all trades in the U.S., have aroused the concerns of regulators because some sophisticated players such as high-frequency traders are able to unfairly exploit the system.

To entertain the possibility that regulators — not all regulators, but at least some with the power to do something about it — have not known about dark pools, and about how large institutions have exploited them to rip off the investing public, is ludicrous. Not sure? Robert Lenzner, Forbes (April 2014):

In today’s offering Salmon blew me away with his exclusive discovery that the SEC secretly promised Goldman Sachs NOT to prosecute any other notorious ripoffs of the innocent investor in mortgage-backed securities other than Abacus, the heretofore biggest Goldman black eye coming out of the melt-up before the meltdown.

How safe will AI systems be? Consider who will eventually develop and control them: monopolistic corporate giants driven by an insatiable thirst for more and more money, and even beastlier governments hell-bent on growing more tentacles to control their citizens. And, coincidentally, these beasts love to massage the backs of those giants.

We are in the very early days of artificial intelligence research, and, as in many other industries, there are many companies competing. And just like other industries, the AI industry will eventually mature, and the larger, more powerful companies will swallow the smaller ones. Eventually there will be one, two, maybe three very large monopolistic enterprises controlling the AI industry. But the biggest and most powerful AIs will be the ones our governments wield.

We dropped two atomic bombs. Only then did we realize we should not use atomic bombs, let alone the far more powerful thermonuclear bombs that followed. These bombs are powerful innovations that were constrained only after we saw the immediate and pervasive destruction of hundreds of thousands of lives. Yet we continue to play with fire. Even after multiple nuclear disasters, we continue to build nuclear reactors, bury spent radioactive fuel rods, and assume nothing bad will happen. Here is a partial list of nuclear incidents and disasters, according to Wikipedia:

  • 2011 Fukushima Daiichi
  • 2001 Instituto Oncologico Nacional
  • 1996 San Juan de Dios
  • 1990 Clinic of Zaragoza
  • 1987 Goiânia
  • 1986 Chernobyl
  • 1979 Three Mile Island
  • 1969 Lucens
  • 1962 Thor

There are many more. Given our track record of constraining powerful innovations with technical ingenuity and legal strictures, it is no wonder we are worried about artificial intelligence. Bill Gates is worried:

I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. […] I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Elon Musk is concerned:

I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure we don’t do something very foolish.

Stephen Hawking is concerned, too:

The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.

Bill Gates, Elon Musk, and Stephen Hawking. They are not dummies.