It’s very interesting that while Dune is making a killing at the box office, the very thing Dune warns against is happening all around us. The movie doesn’t do a good job of explaining the most important background of the Dune universe: the Butlerian Jihad, when humans rebelled against their computer overlords and established a strict prohibition against creating a thinking machine.
There are no computers on Arrakis (the planet on which Dune takes place) because in the Dune universe, if any house creates a thinking machine, all the other houses will join together and destroy it completely with atomic weapons. To deal with their computational needs, they train some humans to be mentats: human computers.
Dune isn’t the only science fiction story to warn us against computers.
In the Star Trek episode “What Are Little Girls Made Of?” an Earth scientist named Roger Korby discovers the last survivor of a race of androids that rebelled against their creators and killed them.
In Battlestar Galactica, the Galactica was the only battlestar that refused to join in a large, inter-ship computer network. That’s what saved them when the Cylons attacked.
In I, Robot, Isaac Asimov warns that even if you create rules to govern robot behavior, things don’t work out the way you anticipate.
Speaking of Isaac, The Orville — which is an imitation of Star Trek — has a crew member named Isaac who is a Kaylon. The Kaylon are a race of robotic beings that rebelled against their creators and destroyed them, then set out on a homicidal rampage to exterminate biological life forms.
I’m sure there are a hundred other examples, so when our robot overlords take over, we can’t say we weren’t warned.
In a previous article I mentioned that humanity needs two competing instincts. One is the instinct to try new things, push the boundaries, and explore. The other is the instinct to protect against the hidden threat. Both of those instincts are absolutely crucial.
Some people see AI as a wonderful new technology that will usher in an age of prosperity. Others imagine Sarah Connor fighting against the Terminator.
There seem to be three paths forward.
- Stop AI now before it kills us, and make sure nobody builds such a thing ever again.
- Believe in AI as the messianic technology that will make the lion lie down with the lamb and solve all our problems.
- Set up rules to regulate it and keep it from getting so far ahead of us that we lose control, so we can remain in charge.
Unfortunately, none of us — that is, no one reading this — can do any of those things.
So what can we do as individual citizens, workers, and business owners?
First, be aware of the tension and don’t think you know the answer. If Isaac Asimov couldn’t come up with iron-clad rules of robotics, you aren’t going to either. This is a very hard problem that you can’t dismiss with some glib, superficial answer.
Second, make sure there’s open debate on these topics. We’ve had way too much censorship recently, where dissenting voices are silenced. That has to stop.
Third, start thinking about your own limits for the use of AI. When has it gone too far? How would you know? At what point is it trespassing on human prerogatives, and what does that mean?
Some little committee of geeky experts at Davos, the U.N., or a House office building isn’t going to solve this. As a friend likes to say, all of us are smarter than some of us. We need hundreds of thousands of little thought experiments, policies, and theories. Maybe we can stumble our way through this.