We can't ban killer robots – it's already too late | Philip Ball


Telling international arms traders they can't make killer robots is like telling soft-drinks makers that they can't make orangeade, says science writer Philip Ball

One response to the call by experts in robotics and artificial intelligence for a ban on killer robots (lethal autonomous weapons systems, or Laws in the language of international treaties) is to say: shouldn't you have thought about that sooner?

Figures such as Tesla's CEO, Elon Musk, are among the 116 specialists calling for the ban. "We do not have long to act," they say. "Once this Pandora's box is opened, it will be hard to close." But such systems are arguably already here, such as the unmanned combat air vehicle Taranis developed by BAE and others, or the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border. Autonomous tanks are in the works, while human control of lethal drones is becoming just a matter of degree.

Yet killer robots have been with us in spirit for as long as robots themselves. Karel Čapek's 1920 play RUR (Rossum's Universal Robots) gave us the word robot (from the Czech robota, meaning forced labour). His humanoid robots, made by the eponymous company for industrial work, rebel and slaughter the human race. They've been doing it ever since, from the Cybermen to the Terminator. Robot narratives rarely end well.

It's hard even to think about the issues raised by Musk and his co-signatories without a robot apocalypse looming in the background. Even if the end of humanity isn't at stake, we just know that one of these machines is going to malfunction, with the messy consequences of Omni Consumer Products' police droid in RoboCop.

Such allusions could seem to make light of a deadly serious subject. OK, so a robot Armageddon might not be exactly frivolous, but these stories, for all that they draw on deep-seated human fears, are ultimately entertainment. It's all too easy, though, for a debate like this to settle into the polarisation of good and bad technologies that science-fiction movies can encourage, with the attendant implication that, so long as we avoid the really bad ones, all will be well.

We should be more afraid of computers than we are

How do we make autonomous technological systems safe and ethical? Avoiding robot-inflicted harm to humans was the problem explored in Isaac Asimov's I, Robot, a collection of short stories so seminal that Asimov's three laws of robotics are sometimes discussed now almost as if they have the force of Isaac Newton's three laws of motion. The irony is that Asimov's stories were largely about how such well-motivated laws could be undermined by circumstances.

In any event, the ethical issues can't easily be formulated as one-size-fits-all principles. The historian Yuval Noah Harari has pointed out that driverless vehicles will need some principles for deciding how to act when faced with an unavoidable and possibly lethal collision: who does the robot try to save? Perhaps, Harari says, we will be offered two models: the Egoist (which prioritises the driver) and the Altruist (which puts others first).

Mightn't a robot make a better assessment using biometrics than a frightened soldier using instincts? Terminator Genisys. Photograph: Melinda Sue Gordon/Allstar/Paramount Pictures

There are shades of science-fictional preconceptions in a 2012 report on killer robots by Human Rights Watch. "Distinguishing between a fearful civilian and a threatening enemy combatant requires a soldier to understand the intentions behind a human's actions, something a robot could not do," it says. Furthermore, "robots would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians". But the first claim is a statement of faith – mightn't a robot make a better assessment, using biometrics, than a frightened soldier relying on instinct? As for the second, one feels: sure, sometimes. At other times, humans in war zones wantonly rape and massacre.

This is not to argue against the report's horror at autonomous robot soldiers, which I for one share. Rather, it brings us back to the key question, which is not about technology but warfare.

Already our sensibilities about the ethics of war are arbitrary. "The use of fully autonomous weapons raises serious questions of accountability, which would erode another established tool for civilian protection," says Human Rights Watch, and it is a fair point – but one impossible to place in any consistent ethical framework while nuclear weapons remain internationally legal. Besides, there's a continuum between drone warfare, soldier-enhancement technologies and Laws that can't be broken down into man versus machine.

This question of automated military technologies is intimately linked to the changing nature of war itself, which, in an age of terrorism and insurgency, no longer has a start or an end, battlefields or armies. As the American strategic analyst Anthony Cordesman puts it: "One of the lessons of modern war is that war can no longer be called war." However we deal with that, it's not going to look like the D-day landings.

Warfare has always used the most advanced technologies available; killer robots are no different. Pandora's box was opened with the invention of steel smelting, if not earlier (and it was almost never a woman who did the opening). And you can be sure someone made a profit from it.

By all means let's try to curb our worst impulses to beat ploughshares into swords, but telling the international arms trade that it can't make killer robots is like telling soft-drinks manufacturers that they can't make orangeade.

Philip Ball is a science writer. His latest book is The Water Kingdom: A Secret History of China


Read more: http://www.theguardian.com/us
