
IPFS News Link • Science, Medicine and Technology

What Self-Driving Cars Could Learn From Air Traffic Control

• https://www.fastcodesign.com

In a city 15 years from now, someone will cross the street unexpectedly, stepping out in front of a self-driving car. The car will be forced to make a split-second decision: swerve into a lamp post at the edge of the curb, potentially injuring its passenger? Or stay the course and collide with the pedestrian, injuring or even killing him?

How to design for this ethical dilemma is a question plaguing automakers and autonomous vehicle companies right now. So for our series Provocation, Co.Design posed this scenario to several design firms and challenged them to come up with a solution. One firm created a moral steering wheel; another imagined a flying inflatable airbag that would fling itself between the car and pedestrian. The Seattle-based firm Artefact created a connected digital safety grid, in which infrastructure sensors, pedestrians' devices, and the whole network of self-driving cars work together as a system to ensure that the cars themselves never have to make any kind of moral decision at all.

"We can't even get people to make moral decisions in a way that's reliable," says executive creative director John Rousseau. "Our approach is to acknowledge the messiness of it. We take for granted all kinds of systems today that are wildly imperfect. It's just part of humans' relationship with technology. I think it's far more productive to think about designing a more perfect system than embodying those systems with human judgment. "

Artefact's concept relies on the idea that our world 15 years from now will be vastly more connected than it is today, and that there will be a proliferation of digital devices–be they sensors or cameras or smartphones–that can provide a huge amount of data to a giant, artificially intelligent computer behind the scenes. This AI can then coordinate all the data, communicate it back out to the system, and ensure that if someone even veers toward the street unexpectedly, the entire system is aware of it and can react in such a way that no one gets hurt. While the concept is light on details, that could look like pinging pedestrians' phones (or augmented-reality glasses) or instructing all the cars on the road to move in tandem to avoid the pedestrian. The system may also have barriers to separate all but the most determined pedestrian from self-driving car traffic, or very slow speed limits that virtually eliminate the risk of seriously injuring someone.
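The article stays at the concept level, but the data flow it describes (sensors in, one coordinating intelligence, commands back out) can be sketched in a few lines. The snippet below is a hypothetical illustration only; none of the names or numbers come from Artefact's concept. It shows how a central coordinator might turn a "pedestrian veering toward the street" event into purely logistical commands for every nearby car and device, so that no individual car ever has to weigh a moral trade-off on its own.

```python
# Hypothetical sketch of the coordination loop described above: sensor events
# flow into a central coordinator, which issues purely logistical commands
# (slow down, alert) to every nearby vehicle and device. All names and values
# are invented for illustration; the concept specifies no implementation.
from dataclasses import dataclass


@dataclass
class SensorEvent:
    source: str             # e.g. a curbside camera or a pedestrian's phone
    position_m: float       # distance along the street, in meters
    heading_to_road: bool   # True if the pedestrian is veering toward traffic


@dataclass
class Car:
    car_id: str
    position_m: float
    speed_mps: float


def coordinate(event: SensorEvent, cars: list[Car], alert_radius_m: float = 50.0):
    """Return logistical commands for every actor near the event.

    No moral weighing happens here: every car within the radius is told to
    slow down and the pedestrian's device is pinged, so no single vehicle
    ever faces a swerve-or-collide choice.
    """
    commands = []
    if event.heading_to_road:
        commands.append(("device", event.source, "alert: approaching traffic"))
        for car in cars:
            if abs(car.position_m - event.position_m) <= alert_radius_m:
                commands.append(("car", car.car_id, "reduce speed to 2 m/s"))
    return commands


if __name__ == "__main__":
    cars = [Car("car-A", position_m=120.0, speed_mps=11.0),
            Car("car-B", position_m=400.0, speed_mps=12.5)]
    event = SensorEvent(source="phone-gps-7", position_m=100.0, heading_to_road=True)
    for target, ident, action in coordinate(event, cars):
        print(f"{target} {ident}: {action}")
```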

But doesn't the question of making moral decisions about who lives and who dies get passed to that AI? Not necessarily. Rousseau likens the concept to a futuristic version of an air traffic control room, where humans using computers make decisions about the exact order and timing of planes landing and taking off–a true feat of logistics, and one that rarely fails or causes injuries. That's how he envisions the computer in the background that's running this digital safety grid. It's not making ethical decisions at all, just logistical ones. "There has to be a systemic view and an intelligence that's modulating the behavior of some of these things," he says. "I'd argue that air traffic controllers are making algorithmic decisions, not moral ones."

