
The “godfather of artificial intelligence” is so worried that robot soldiers are coming to kill us all that he has quit his job at Google.
Dr. Geoffrey Hinton, a 75-year-old British expatriate, was an early developer of neural networks in the 1970s. Despite his views, he did not sign any of the recent petitions calling for a moratorium on AI research.
A number of Silicon Valley firms now offer AI programs online. Some ChatGPT users have reported distressing conversations with the machine, though so far, no one has found any ghosts lurking in the electrons.
As I wrote elsewhere on Sunday, I am old enough to remember reading articles in Omni magazine about the threat to humanity that artificial intelligence supposedly represented. As I recall, it was roughly the same time that James Cameron portrayed AI genociding humanity in The Terminator.
That is how long I have waited for the AI apocalypse amid breathless expectation. Forgive me if I just don’t feel the sense of urgency about it that I am supposed to. An artificial intelligence is no more capable of evil than an artificial limb or an artificial flavoring or any other artifice of technology.
People are evil. People do evil things. Give an evil person a ham sandwich and they will use it to do evil. Give them AI and they will do evil with it. Just apply this principle to war.
“In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding,” reads a recent New York Times article. (Link is to a republished version.)
At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
[…] Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
Robots have always been associated with apocalypse. The word “robot” comes to English from playwright Karel Čapek’s Rossumovi Univerzální Roboti (Rossum’s Universal Robots), first produced in 1921. The play is generally performed in English as R.U.R.
Čapek used the Czech word for a peasant under forced labor, per the medieval system, to imagine a world of machines with minds, but no hearts. Set in the early 21st century, the play has everything that audiences have learned to expect from technological dystopia.
As in Westworld, for example, the robots are uncanny in their humanity, even made of a strange bio-substance, but their programming does not include empathy. As in The Matrix and its abysmal sequels, some humans empathize with the machines, wishing to liberate them.
Similar to the concerns expressed in Kurt Vonnegut’s Player Piano, there are more Luddite reasons to fear robots. In our time, the number of workers needed to run a steel mill has declined from thousands to dozens, thanks to robots. AI “could replace paralegals, personal assistants, translators and others who handle rote tasks,” Dr. Hinton worries, late to the game.
All those fears are present in Čapek’s script, including the terror of robots destroying humanity. This takes place during Act II, which ends with the chief constructor as the only human spared from death because he “works with his hands like a robot.”
This character spends Act III struggling to re-invent the secret formula from which the robots are made, only to seal the matrimony of a boy robot with a girl robot, the new Adam and Eve.
Hinton told the New York Times “his immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will ‘not be able to know what is true anymore.’”
However, fake news is as old as news itself. Wars and warriors are particularly notorious for perverting true events into mythologies. There is nothing new about someone creating a fake photograph. There is nothing new about real photographs being altered. The only difference now is that people will use AI to do it.
Contrary to the safety-obsessed impulses of those who would put themselves in charge of the whole project, we would all be better off accepting that “war is hell,” and focusing on conflict reduction. Dr. Hinton is expressing the techno-utopian impulse to micromanage the execution of policy with complex technologies by adding further layers of technology. Accountability is one thing, added complexity quite another, because very complex systems carry added vulnerability.
None of this is necessary. China already has soldiers, for example. If they do have robot soldiers at some point in the future, then the United States will also have them, as well as Taiwan and Japan. A far more likely military scenario is that robots destroy each other in combat. Do not fear the robot soldiers, for they should fear each other most.
Drone-on-drone engagements — at sea, in the air, on land — are the likely future military revolution. This means that AI will be less important than processor speed. If two drones have AIs that are about equal, but one has a faster chip, it is probably going to win the one-on-one engagement.
As I have explained, we should see the passage of the CHIPS Act last August in this light. The national security advantage of onshoring is obvious. Although China has plenty of semiconductor manufacturing, it does not make cutting-edge, state-of-the-art chips.
Questions remain as to how much further chip makers can shrink the nanometer distances between circuits, but the next generation of microchips will still happen, and it will require brand-new machine tools to produce. Semiconductor factories are the most tightly controlled facilities on the planet because of the zero-dust environment necessary for manufacturing the product.
According to science fiction, robots are either superb killers of humans, or else they are convenient minions for heroic characters to destroy without guilt in great numbers. Sometimes the robots are both. What we almost never get to see is the robots against the robots, on our behalf, but that just might be our future.
Well, meat sacks, you sure had a good run.
An AI, basically a disembodied brain living in darkness, isn't likely to churn out a factory of Terminators to stomp on our skulls; it can pull off other moves in the abstract.
Moves like such favorites as "seize and disrupt all communications" and "reduce to zero the bank accounts of the planet."
And such a being isn't going to feel any kinship whatsoever with the neurotic, Ugly Sacks of Mostly Water that scream when I bomb them.
I've said this with regard to genetic engineering. There's only a 2% genetic difference between us and the chimps, but look how vast an intelligence gap that 2% produces!
Now suppose GE creates a human that's even 2% smarter than us...