Should we stop AGI before it destroys us, or just naively hope for the best?

Artificially intelligent man eater
Summary: Artificial General Intelligence promises progress in many ways but also poses serious risks, including manipulation, job loss, and loss of control over our destiny. We seem to have two choices: halt AGI now or risk an unstoppable force that will dominate our future.

You wake up to the smell of freshly brewed coffee made by your AI housemaid. She leads you through your daily exercise routine, then serves you a delicious and healthy breakfast. There’s nothing on your schedule today except lunch with friends, a music lesson, and drinks around the fire in the evening. Universal Basic Income provides all you need, and The System does all the work. But you can’t shake the feeling that you were made for something more.

Is that the future we’re hoping for?

Artificial General Intelligence could create a huge leap forward for mankind. It could bring …

  • Scientific and medical breakthroughs
  • Vast increases in productivity
  • Optimization of resources
  • The end of poverty
  • Freedom for humans to pursue their dreams without the drudgery of work

But there are also a lot of threats.

  • AI-driven weapons, including cyber weapons
  • Thought policing
  • Subtle manipulation: AI could brainwash us better than any dictator could hope to
  • The elimination of human jobs
  • Loss of human autonomy
  • AGI deciding that humans are a barrier to its agenda

It’s not as if we haven’t been warned — again and again.

  • Dune
  • 2001: A Space Odyssey
  • Star Trek (“What Are Little Girls Made Of?”)
  • Terminator
  • Battlestar Galactica
  • The Matrix
  • I, Robot
  • Ex Machina
  • The Orville (“Identity,” Parts 1 and 2)
  • Neuromancer

Or, to put it in simpler terms, “The Sorcerer’s Apprentice.” The lesson is plain: if you make something you can’t control, you create a world of trouble.

But … in practical terms, if we don’t do it, the Chinese will. That’s the imminent, undeniable threat.

It reminds me of Jethro Tull’s “Locomotive Breath”: “Old Charlie stole the handle, and the train it won’t stop going — no way to slow down.”

Let’s go back to Dune for a moment. Humanity threw off the yoke of the thinking machines and created the Great Convention, under which any house that built a thinking machine would see all the other houses join together and utterly destroy its planet.

That sounds brutally effective, but since we’re all stuck on the same planet, it’s not a very good model for us.

How do we create an environment where everyone is incentivized to avoid creating AGI?

There are several challenges.

Whoever creates it first gets the first-mover advantage and could gain overwhelming power quickly.

Because of this, the fear that some other nation might develop AGI first forces every nation to pursue it. This dynamic can get silly at times: the U.S. military apparently investigated psychic abilities out of fear the Soviets would develop them first.

On top of all this is the problem of secrecy. Unlike nuclear weapons development, which can be detected, AGI can be developed without anybody knowing about it.

Except maybe in this one way. AI requires a lot of power, so we could — conceptually, at least — put a strict limit on energy consumption. (What governing body would do this?)

It seems there are three paths forward.

  1. The path we’re on, which is to naively assume that AGI will be a good thing.
  2. Kill it now before it kills us. If we can even do that.
  3. Some middle way — to develop AI responsibly and safely, whatever that means. We have some limited precedents with nuclear arms control and bioethics in medicine, but I’m not sure they’ll work here.

Practically speaking, I think there are only two options: stop it now, or naively hope for the best.

I hate to be channeling Jeremiah here, but amidst all the happy talk about AI, we need to step back from time to time and ask what the Hell we’re doing.
