

It's Official: Robots Are Now Injuring And Killing Humans

• http://www.thedailysheeple.com

Even as robotics experts, universities and tech luminaries sound the alarm about the potential for a future filled with killer robots, this technology has evidently already arrived … minus the stringent ethics.

There can be no debate that killer robots are already in military use. We are in the middle of an admitted global drone arms race, one producing mass civilian casualties under a specious U.S. legal framework that has thus far not even excluded American citizens from being targeted for assassination. There is also a host of ground- and sea-based systems that are equally lethal and continue to be tested in massive joint military exercises.

Time after time we have seen that military developments, normally first employed "over there," eventually trickle down into the everyday lives of citizens at home. Sometimes the technology is beneficial; more often it is not. When weapons of war are handed down to local police, for example, we can't be shocked that we soon find ourselves debating "militarized police" in the United States.

Artificial intelligence is another area that has ushered in dual-use technologies such as drone surveillance and warfare, robot security guards and police, and self-driving vehicles for both military and civilian use. In all of these cases we are now seeing disturbing misapplications as well as outright system failures. In fact, we are seeing many "firsts" that threaten to become a trend if not quickly addressed and reined in.

Just a few weeks ago we witnessed the first reported human death from a self-driving vehicle, when a Tesla's Autopilot sensors failed to detect a tractor trailer crossing its path and the driver was killed. There were earlier warning signs of this potential: Google's self-driving cars were at first involved in accidents in which they were struck by other vehicles, but one later actually caused a collision with a bus.

Aside from the technical challenges, questions have been raised about the ethics and morality that will be required in certain fatal situations. That area, too, is raising eyebrows – is it right to sacrifice the lives of some to save others?

The standards are already becoming morally complex. Google X's Chris Urmson, the company's director of self-driving cars, said the company was trying to work through some difficult problems. Where to turn – toward the child playing in the road or over the side of the overpass?

Google has come up with its own Laws of Robotics for cars: "We try to say, 'Let's try hardest to avoid vulnerable road users, and beyond that try hardest to avoid other vehicles, and then beyond that try to avoid things that don't move in the world,' and then to be transparent with the user that that's the way it works," Urmson said. (Source)
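To make that ordering concrete, here is a minimal Python sketch of how such a prioritized avoidance rule could be expressed. The category names, the numeric ranks and the most_protected function are illustrative assumptions made for this article, not Google's actual code.

# Illustrative only: a toy version of the priority ordering Urmson describes.
# Lower rank means the planner tries hardest not to hit that class of object.
AVOIDANCE_RANK = {
    "vulnerable_road_user": 0,  # pedestrians, cyclists
    "other_vehicle": 1,         # cars, trucks, buses
    "static_object": 2,         # "things that don't move in the world"
}

def most_protected(obstacles):
    """obstacles: list of (label, distance_in_meters) tuples.
    Returns the obstacle to avoid above all others: first by category rank,
    then by proximity within the same category."""
    return min(obstacles, key=lambda o: (AVOIDANCE_RANK[o[0]], o[1]))

# Example: a distant pedestrian still outranks a nearby parked object.
scene = [("static_object", 5.0), ("other_vehicle", 12.0), ("vulnerable_road_user", 20.0)]
print(most_protected(scene)[0])  # prints: vulnerable_road_user

The point is simply that the car's choices are governed by an explicit, ordered set of priorities rather than case-by-case moral judgment, which is also why Google stresses being transparent with the user about how it works.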

These incidents and dilemmas have thus far occurred during training and testing, which might mitigate some of the seriousness, but they nonetheless point to genuine flaws that should preclude these vehicles from being widely deployed.

As a quick aside, it's essential to keep in mind that Isaac Asimov offered the world his "Three Laws of Robotics" in 1942; although they appeared in fiction, they have since been widely acknowledged within mainstream robotics, and in their original form they are of such simple perfection that there is really no excuse for the errors we are seeing:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
