
The Terminator Scenario: Are We Giving Our Military Machines Too Much Power?

• Ben Austen via PopSci.com
 
[Image: Cardiorobotics Cardioarm. Photo: John B. Carnett]
Last August, U.S. Navy operators on the ground lost all contact with a Fire Scout helicopter flying over Maryland. They had programmed the unmanned aerial vehicle to return to its launch point if ground communications failed, but instead the machine took off on a north-by-northwest route toward the nation’s capital. Over the next 30 minutes, military officials alerted the Federal Aviation Administration and North American Aerospace Defense Command and readied F-16 fighters to intercept the pilotless craft. Finally, with the Fire Scout just miles shy of the White House, the Navy regained control and commanded it to come home. “Renegade Unmanned Drone Wandered Skies Near Nation’s Capital,” warned one news headline in the following days. “UAV Resists Its Human Oppressors, Joyrides over Washington, D.C.,” declared another.

The Fire Scout was unarmed, and in any case hardly a machine with the degree of intelligence or autonomy necessary to wise up and rise up, as science fiction tells us the robots inevitably will do. But the world’s biggest military is rapidly remaking itself into a fighting force consisting largely of machines, and it is working hard to make those machines much smarter and much more independent. In March, noting that “unprecedented, perhaps unimagined, degrees of autonomy can be introduced into current and future military systems,” Ashton Carter, the U.S. undersecretary of defense for Acquisition, Technology and Logistics, called for the formation of a task force on autonomy to ensure that the service branches take “maximum practical advantage of advances in this area.”

In Iraq and Afghanistan, U.S. troops have been joined on the ground and in the air by some 20,000 robots and remotely operated vehicles. The CIA regularly slips drones into Pakistan to blast suspected Al Qaeda operatives and other targets. Congress has called for at least a third of all military ground vehicles to be unmanned by 2015, and the Air Force is already training more UAV operators every year than fighter and bomber pilots combined. According to “Technology Horizons,” a recent Air Force report detailing the branch’s science aims, military machines will attain “levels of autonomous functionality far greater than is possible today” and “reliably make wide-ranging autonomous decisions at cyber speeds.” One senior Air Force engineer told me, “You can envision unmanned systems doing just about any mission we do today.” Or as Colonel Christopher Carlile, the former director of the Army’s Unmanned Aircraft Systems Center of Excellence, has said, “The difference between science fiction and science is timing.”



We are surprisingly far along in this radical reordering of the military’s ranks, yet neither the U.S. nor any other country has fashioned anything like a robot doctrine or even a clear policy on military machines. As quickly as countries build these systems, they want to deploy them, says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in England: “There’s been absolutely no international discussion. It’s all going forward without anyone talking to one another.” In his recent book Wired for War: The Robotics Revolution and Conflict in the 21st Century, Brookings Institution fellow P.W. Singer argues that robots and remotely operated weapons are transforming wars and the wider world in much the way gunpowder, mechanization and the atomic bomb did in previous generations. But Singer sees significant differences as well. “We’re experiencing Moore’s Law,” he told me, citing the axiom that computer processing power will double every two years, “but we haven’t got past Murphy’s Law.” Robots will come to possess far greater intelligence, with more ability to reason and self-adapt, and they will also of course acquire ever greater destructive power. So what does it mean when whatever can go wrong with these military machines just might?

I asked that question of Werner Dahm, the chief scientist of the Air Force and the lead author on “Technology Horizons.” He dismissed as fanciful the kind of Hollywood-bred fears that informed news stories about the Navy Fire Scout incident. “The biggest danger is not the Terminator scenario everyone imagines, the machines taking over—that’s not how things fail,” Dahm said. His real fear was that we would build powerful military systems that would “take over the large key functions that are done exclusively by humans” and then discover too late that the machines simply aren’t up to the task. “We blink,” he said, “and 10 years later we find out the technology wasn’t far enough along.”

Dahm’s vision, however, suggests another “Terminator scenario,” one more plausible and not without menace. Over the course of dozens of interviews with military officials, robot designers and technology ethicists, I came to understand that we are at work on not one but two major projects, the first to give machines ever greater intelligence and autonomy, and the second to maintain control of those machines. Dahm was worried about the success of the former, but we should be at least as concerned about the failure of the latter. If we make smart machines without equally smart control systems, we face a scenario in which some day, by way of a thousand well-intentioned decisions, each one seemingly sound, the machines do in fact take over all the “key functions” that once were our domain. Then “we blink” and find that the world is one we no longer are able to comprehend or control.

Low-Hanging Fruit

Today soldiers and airmen can see that the machines are becoming their equals or betters, at least in some situations. Last summer, when I visited the Air Force Research Laboratory at Wright-Patterson Air Force Base near Dayton, Ohio, scientists there showed me a video demonstration of a system under development, called Sense and Avoid, that they expect to be operational by 2015. Using a suite of onboard sensors, unmanned aircraft equipped with this technology can detect when another aircraft is close by and quickly maneuver to avoid it. Sense and Avoid could be used in combat situations, and it has been tested in computer simulations with multiple aircraft coming at the UAV from all angles. Its most immediate benefit, however, might be to offer proof that UAVs can be trusted to fly safely in U.S. skies. The Federal Aviation Administration does not yet allow the same UAVs that move freely in war zones to come anywhere near commercial flights back home and only very rarely allows them to fly even in our unrestricted airspace. But Sense and Avoid algorithms follow the same predictable FAA right-of-way rules required of all planes. At one point in the video, which depicted a successful test of the system over Lake Ontario, a quote flashed on the screen from a pilot who had operated one of the oncoming aircraft: “Now that was as a pilot would have done it.”
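
To make the geometry concrete, here is a minimal sketch of the kind of test such a system has to run continuously: a closest-point-of-approach check between two constant-velocity tracks, with a break-right maneuver when the predicted separation falls below a threshold. The 2-D kinematics, the 500-meter separation threshold and the 30-degree right turn are illustrative assumptions, not the actual Sense and Avoid parameters.

```python
import math

# Closest-point-of-approach (CPA) check, a common building block in
# sense-and-avoid logic. Positions in meters, velocities in m/s.
def cpa(own_pos, own_vel, intruder_pos, intruder_vel):
    """Return (time_to_cpa, miss_distance) for two constant-velocity tracks."""
    rx = intruder_pos[0] - own_pos[0]
    ry = intruder_pos[1] - own_pos[1]
    vx = intruder_vel[0] - own_vel[0]
    vy = intruder_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                                # no relative motion
        return 0.0, math.hypot(rx, ry)
    t = max(0.0, -(rx * vx + ry * vy) / v2)      # time of closest approach
    return t, math.hypot(rx + vx * t, ry + vy * t)

def avoid(own_pos, own_vel, intruder_pos, intruder_vel, min_sep=500.0):
    """If the predicted miss distance is too small, command a right turn,
    echoing the FAA convention that converging aircraft break right.
    The 500 m threshold and 30-degree turn are assumptions."""
    t, miss = cpa(own_pos, own_vel, intruder_pos, intruder_vel)
    if miss < min_sep:
        heading = math.atan2(own_vel[1], own_vel[0]) - math.radians(30)
        speed = math.hypot(own_vel[0], own_vel[1])
        return (speed * math.cos(heading), speed * math.sin(heading))
    return own_vel                               # no conflict: hold course

# Head-on encounter: both aircraft at 100 m/s, 10 km apart.
print(avoid((0, 0), (100, 0), (10000, 0), (-100, 0)))
```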

Machines already possess some obvious advantages over us mere mortals. UAVs can accelerate beyond the rate at which pilots normally black out, and they can remain airborne for days, if not weeks. Some military robots can also quickly aim and fire high-energy lasers, and (in controlled situations) they can hit targets far more consistently than people do. The Army currently uses a squat, R2-D2–like robot called Counter Rocket, Artillery and Mortar, or C-RAM, that employs radar to detect incoming rounds over the Green Zone or Bagram Airfield and then shoots them down at a remarkable rate of 70 percent. The Air Force scientists also spoke of an “autopilot on steroids” that could maximize fuel efficiency by drawing on data from weather satellites to quickly modify a plane’s course. And a computer program that automatically steers aircraft away from the ground when pilots become disoriented is going live on F-16s later this year.
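
The ground-collision program mentioned above comes down, at its core, to a timing test: projected time to impact versus the time a recovery maneuver needs. A minimal sketch of that test follows, assuming a constant descent rate and an arbitrary 1.5-second safety buffer; the fielded F-16 logic is far more sophisticated.

```python
def recovery_needed(altitude_m, descent_rate_ms, pullup_time_s, buffer_s=1.5):
    """True if projected time to ground impact falls inside the window
    needed to complete a pull-up (plus a safety buffer, assumed here)."""
    if descent_rate_ms <= 0:                 # level or climbing: no threat
        return False
    time_to_impact = altitude_m / descent_rate_ms
    return time_to_impact < pullup_time_s + buffer_s

# Diving at 150 m/s from 750 m with a 4-second pull-up:
# 5 s to impact < 5.5 s window, so the autopilot would take over.
print(recovery_needed(750, 150, 4.0))        # True
```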

For the moment, the increase in machine capability is being met with an increase in human labor. The Air Force told Singer that an average of 68 people work with every Predator drone, most of them to analyze the massive amount of data that each flight generates. And as the Pentagon deploys ever more advanced imaging systems, from the nine-sensor Gorgon Stare to the planned 368-sensor ARGUS, the demand for data miners will continue to grow. Because people are the greatest financial cost in maintaining a military, though, the Pentagon is beginning to explore the use of “smart” sensors. Drawing on motion-sensing algorithms, these devices could decide for themselves what data is important, transmitting only the few minutes when a target appears, not the 19 hours of empty desert.
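
A crude version of such a motion-sensing filter can be sketched with simple frame differencing: transmit a frame only when enough pixels have changed since the previous one. Frames are modeled here as flat lists of grayscale values, and the pixel and fraction thresholds are assumptions for illustration, not parameters of any fielded sensor.

```python
def frame_is_interesting(prev, curr, pixel_thresh=10, frac_thresh=0.01):
    """True if at least frac_thresh of pixels changed by pixel_thresh or more."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) >= pixel_thresh)
    return changed >= frac_thresh * len(curr)

def select_frames(frames):
    """Keep the first frame plus any frame showing significant motion."""
    kept = [frames[0]]
    for prev, curr in zip(frames, frames[1:]):
        if frame_is_interesting(prev, curr):
            kept.append(curr)
    return kept

# Three identical "empty desert" frames, then one with a moving target:
desert = [50] * 100
target = [50] * 90 + [200] * 10      # 10% of pixels change brightly
print(len(select_frames([desert, desert, desert, target])))   # 2 frames sent
```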

Ronald Arkin, the director of the Mobile Robot Laboratory at the Georgia Institute of Technology, hypothesizes that, within certain bounded contexts, armed robots could even execute military operations more ethically than humans. A machine equipped with Arkin’s prototype “ethical governor” would be unable to execute lethal actions that did not adhere to the rules of engagement, minimize collateral damage, or take place within legitimate “kill zones.” Moreover, machines don’t seek vengeance or experience the overwhelming desire to protect themselves, and they are never swayed by feelings of fear or hysteria. I spoke to Arkin in September, just a few days after news broke that the Pentagon had filed charges against five U.S. soldiers for murdering Afghan civilians and mutilating their corpses. “Robots are already stronger, faster and smarter,” he said. “Why wouldn’t they be more humane? In warfare where humans commit atrocities, this is relatively low-hanging fruit.”
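
At bottom, the governor concept is a constraint check that defaults to denial. Here is a minimal sketch of that idea; the rule set, the Target fields and the collateral-damage threshold are illustrative assumptions, not the workings of Arkin's actual prototype.

```python
from dataclasses import dataclass

@dataclass
class Target:
    in_kill_zone: bool           # inside a legitimate engagement area?
    positively_identified: bool  # confirmed hostile under the rules of engagement?
    collateral_estimate: float   # predicted noncombatant harm, 0.0 to 1.0

def governor_permits(target: Target, max_collateral: float = 0.1) -> bool:
    """Permit a lethal action only if every constraint holds; deny otherwise."""
    return (target.in_kill_zone
            and target.positively_identified
            and target.collateral_estimate <= max_collateral)

print(governor_permits(Target(True, True, 0.05)))    # True: all rules pass
print(governor_permits(Target(True, False, 0.05)))   # False: ID not confirmed
```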
