Saturday, June 22, 2013

sentry duty for ‘cultibots’

Earlier this month, at TED Global, science fiction author Daniel Suarez made a powerful argument for an international legal framework prohibiting the development of lethal autonomous systems, otherwise known as killer robots. He went on to suggest that machines designed to recognize menacing rogue behavior in other machines and raise an alarm could be used to protect against any killer robots that might be set loose despite the ban.

Perhaps that task, detecting the presence of a killer robot and raising an effective alarm, could be distributed across a wide variety of machines, particularly those working outside. ‘Cultibots’ would be ideal for this purpose, since they would also be geographically distributed, providing better coverage. I wouldn't suggest requiring such functionality, at least not yet, since that would only delay the deployment of robots for other purposes while this relatively sophisticated capability was developed; but once it becomes available, it should be incorporated into any machine that might be in a position to make early contact with a rogue and has sufficient capacity to support it.
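
To make the idea concrete, here's a minimal sketch of what the sentry side of that might look like, in Python. The port number, the message format, and the observe/classify helpers are all invented for illustration; the genuinely hard part, a classifier that reliably recognizes menacing behavior, is left as a stub.

    import json
    import socket
    import time

    ALARM_PORT = 9999                  # hypothetical port reserved for rogue alarms
    BROADCAST_ADDR = "255.255.255.255"

    def raise_alarm(observer_id, location, evidence):
        """Broadcast a rogue-robot alarm to anything listening on the local network."""
        message = json.dumps({
            "type": "rogue_alarm",
            "observer": observer_id,
            "location": location,      # e.g. GPS coordinates of the sighting
            "evidence": evidence,      # brief description of the menacing behavior
            "timestamp": time.time(),
        }).encode("utf-8")
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(message, (BROADCAST_ADDR, ALARM_PORT))

    def sentry_sweep(observer_id, observe_nearby_machines, classify_behavior):
        """One sentry pass: scan nearby machines, raise an alarm on menace."""
        for machine in observe_nearby_machines():
            if classify_behavior(machine) == "menacing":
                raise_alarm(observer_id, machine["location"], machine["summary"])

Because the alarm is a broadcast rather than a report to a single central server, any machine in radio range can hear it, which suits a geographically distributed fleet.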

Another, even more sophisticated capability that should be developed and incorporated into all types of robots is the ability to recognize and block any attempt to co-opt or destructively repurpose them, whether by a nearby machine or through a remote connection.
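
A crude sketch of the blocking half, assuming commands arrive with an authentication tag and that a key was provisioned at the factory. The key, the command names, and the handlers are all invented for illustration, and a real design would want public-key signatures anchored in tamper-resistant hardware rather than a bare shared secret:

    import hashlib
    import hmac

    # Hypothetical key provisioned at manufacture.
    FACTORY_KEY = b"provisioned-at-manufacture"

    # The only commands this machine will ever accept, local or remote.
    ALLOWED_COMMANDS = {"till", "seed", "water", "weed", "return_to_dock"}

    def is_authentic(payload: bytes, tag: bytes) -> bool:
        """Check the command's authentication tag in constant time."""
        expected = hmac.new(FACTORY_KEY, payload, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    def handle_command(payload: bytes, tag: bytes) -> None:
        if not is_authentic(payload, tag):
            print("rejected: bad authentication tag (possible co-opt attempt)")
            return
        command = payload.decode("utf-8")
        if command not in ALLOWED_COMMANDS:
            print(f"rejected: out-of-scope command {command!r}")
            return
        print(f"executing {command!r}")  # dispatch to the real actuator code

The point of the second check is that even a correctly signed command is refused if it falls outside the machine's designed repertoire, so compromising the key alone isn't enough to turn the machine into something else.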

Suarez went on to suggest that machines dedicated to dealing with rogue robots might snare them and haul them away, but he argued that, even though they would be dealing with other machines rather than people, they shouldn't be allowed to autonomously decide to destroy those machines; that decision should always be made by a human being. Under most circumstances, other types of robots should avoid engaging a rogue if possible.
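
That division of authority is easy to express in code; the point is that the destructive branch simply cannot be reached without an affirmative human decision. All of the helpers here are hypothetical stand-ins for a hunter robot's real subsystems:

    def engage_rogue(rogue_id, snare, request_human_decision, destroy):
        """Snare autonomously; destroy only on an affirmative human decision."""
        snare(rogue_id)                       # non-destructive capture: no human needed
        if request_human_decision(rogue_id):  # blocks until an operator rules
            destroy(rogue_id)
        # otherwise, hold the snared machine and await human disposition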

A situation in which it might not be possible to avoid engagement would be one in which a human being were at risk, as expressed by Isaac Asimov's First Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Considering that an attack on a human might be the first clue that a robot had gone rogue, and that a second's hesitation could make the difference between life and death, any robot in the vicinity should be ready to do what it can to block such an attack. It might even suspend other duties so as to pay close attention whenever both a human and a robot of unknown trustworthiness are nearby, devoting any spare processor cycles to running scenarios and developing contingency action plans.
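
In scheduling terms, that might amount to something like the following loop, where sense_scene, do_routine_work, and plan_contingency are hypothetical stand-ins for the robot's real subsystems, and the tick rate is purely illustrative:

    import time

    def duty_loop(sense_scene, do_routine_work, plan_contingency):
        """Control loop that trades routine work for vigilance when warranted."""
        while True:
            scene = sense_scene()  # e.g. {"humans": [...], "untrusted_robots": [...]}
            if scene["humans"] and scene["untrusted_robots"]:
                # Heightened vigilance: suspend routine duties and spend the
                # cycle budget tracking each untrusted machine and pre-computing
                # responses to the attacks it could plausibly mount.
                for robot in scene["untrusted_robots"]:
                    plan_contingency(robot, scene["humans"])
            else:
                do_routine_work()
            time.sleep(0.05)  # illustrative ~20 Hz tick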

Of course, governments will want the ability to remotely repurpose whatever robots are available, not only to deal with rogues but to deal with all kinds of emergencies. However, there should be no way for them to turn ostensibly innocuous machines into combatants (beyond the requirements of the First Law), much less into autonomous killers. To ensure this, remote repurposing of ordinary robots should be constrained to a short, fixed list of predetermined alternative modes. Any changes to that list should require either a ‘brain’ replacement or the physical presence of a factory rep to perform the reprogramming, and this too should be made part of international law.
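
Enforcing that constraint in software is straightforward; the hard part is the legal and physical machinery around it. A sketch, with mode names invented for illustration, imagining the list as living in read-only firmware that only a factory visit can change:

    from enum import Enum
    from typing import Optional

    class Mode(Enum):
        CULTIVATE = "cultivate"
        SENTRY = "sentry"
        DEBRIS_CLEARING = "debris_clearing"
        FACTORY_TEST = "factory_test"  # selectable only with physical presence

    # Baked into read-only firmware at manufacture. Under the proposed rule,
    # changing this set requires a 'brain' swap or an in-person factory
    # reflash; there is no remote update path.
    REMOTELY_SELECTABLE = frozenset(
        {Mode.CULTIVATE, Mode.SENTRY, Mode.DEBRIS_CLEARING}
    )

    def remote_repurpose(requested: str) -> Optional[Mode]:
        """Grant a remote mode change only if it is on the fixed list."""
        try:
            mode = Mode(requested)
        except ValueError:
            return None          # not a mode this machine has ever heard of
        if mode not in REMOTELY_SELECTABLE:
            return None          # known mode, but not remotely selectable
        return mode

Anything not on the list simply fails closed, so a government (or an attacker posing as one) can shuffle a machine among its approved roles but can never push it into a role its builders didn't foresee.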
