I’ve noticed two perspectives on AI that you can hear in the background when people talk about the subject. Briefly, the two positions are AI as a tool vs. AI as a mind. And then, as is often the case, there’s a position in the middle.
The first group believes AI is a probabilistic symbol generator that mimics intelligence very effectively. It doesn’t “know” anything. It can’t make judgments. It just manipulates symbols based on probabilities. This perspective is similar to John Searle’s “Chinese room.”
The second group believes AI is (or is about to become) like a human mind. It does understand things. It has goals and motivations. It has (or can create) an agenda. This view of AI sees it as capable of making judgments. Commander Data would be a good illustration of this view.
There are very practical consequences of these two views.
Someone in the first group might give AI decision-making authority on certain tasks — like driving a car, or trading stocks — but would never give it decision-making authority on moral issues.
People in the second group might do just that. They might allow AI to decide when and where to drop a bomb, or they might give guns to AI cops.
You can see these two attitudes in the way Captain Kirk viewed the M-5 — a very advanced computer that only mimicked moral judgment — and the way The Next Generation crew viewed Data. Captain Picard believed in Data and allowed him to take command.
The mind model is the one that feeds conspiracy theories. If AI is just a very good symbol manipulator that can mimic thought, it’s not going to come up with its own agenda to enslave humanity. If AI is able to think and make judgments, it might do that.
My current attitude — which I hold to rather loosely — is that AI will never be more than a very good symbol manipulator. AI will never have a soul, if you want to think of it that way. It will become a better and better symbol manipulator, but it will never have its own agenda.
That leads to the middle position, which is that AI doesn’t think like we do and lacks true self-awareness, but it may become so skilled at mimicking thought and judgment that its behavior will be indistinguishable from genuine agency.
This is the conundrum that Starbuck faced with the Cylons in Battlestar Galactica. They were just machines (“toasters,” or “software”), but they were such good machines that it made you wonder.
Where does all this leave us?
As moral agents ourselves, we need to get a handle on the stakes. What if we’re wrong?
What if AI is just a sophisticated puppet, but we fear it has some secret, evil intent? We might unnecessarily restrain what it can do to help us out of a misguided fear.
Now flip the script. What if it’s actually becoming a sovereign mind, and we keep treating it like a glorified calculator? It might secretly manipulate us to do its bidding, to our own demise. Think of Alicia Vikander’s character (Ava) in Ex Machina.
I’m mostly persuaded by David Chalmers’s argument that if AI can become conscious and have first-person experience, we have to consider the possibility that we ourselves are nothing more than a simulation. I solve Chalmers’s puzzle by rejecting the premise. I don’t believe AI will ever have anything analogous to a human soul.
But even if I’m right — even if AI will never have first-person experience — I still believe we’re going to have to grant AI certain “human” rights.
Here’s why. If you have some thing that looks and speaks and acts like a human — and we will have such things before too long — but you justify treating it cruelly because it lacks a soul or true consciousness, then you become the monster. Think of when Starbuck interrogated Leoben. She was very cruel.
There is the danger of over-anthropomorphizing AI and letting sentimentality cloud our judgment. But I think our history tells us that we’re more likely to err when we justify cruelty than when we extend too much compassion. In other words, the downside of treating a really good calculator as if it has first-person experience is less than the downside of treating something with first-person experience as a tool.
The bottom line is that I’m not worried about the robot’s feelings. I’m worried about what we become if we mistreat the robots.
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection (TNGS). The leading robotics group working from this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata, created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness, in a parsimonious way, based only on further evolutionary development of the brain areas responsible for these functions. No other research I’ve encountered is anywhere near as convincing.
I post this because almost every video and article about the brain and consciousness that I encounter takes the attitude that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public, and obviously I consider it the route to a truly conscious machine, both primary and higher-order.
My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar discussing some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow