Lethal robots are something of a fascination for me. And more specifically, the point in time at which unmanned aerial vehicles, which the military prefers I not call drones, reach a sufficient level of autonomy that they can operate largely without the direction or influence of a human being. I wrote about the point when drones might reach that threshold in a paper for the Hoover Institution last year.
There are benefits to autonomous drones, and autonomy may become an essential characteristic for the military’s robot planes. (I think it might already be.) But there’s an obvious anxiety here: What happens when, or if, our robot servants become our overlords, turning their free-thinking brains towards the matter of our subjugation and ultimate destruction?
I’m heartened to learn that no less an esteemed institution than Cambridge University is concerned about this dilemma, and a host of other “extinction-level risks to our species as a whole” that might be posed by advanced human technologies.
The Cambridge Project for Existential Risk was founded late last year by Jaan Tallinn, a Skype co-founder, and two Cambridge professors, Huw Price and Martin Rees. They aim to set up a “multidisciplinary research center dedicated to the study and mitigation of risks of this kind.” Put another way, I suppose that means defending us from killer drones. The founders report they’re looking for funding. John Brennan, maybe you want to chip in.
The project has attracted a host of advisers from academia, think tanks, and the world of venture capital. And no less a figure than Stephen Hawking, who has himself been greatly aided by advanced technology, has joined forces with the effort.
I’ll be eager to learn what else the project has in store. More to come.