Wednesday, 24 August 2016

Autonomous vehicles: picking whom to injure

Sarah Cunningham-Scharf on August 23, 2016



At the beginning of May, the first known fatality involving a self-driving car occurred in Florida. The driver had his Tesla Model S on Autopilot while cruising along a divided highway when the radar failed to recognize the big rig that pulled in front of him; because it was a bright day, the sensors interpreted the white truck as a road sign.


The incident has fueled debate over the ethical guidelines involved in programming driverless cars. In emergency situations like the one that killed 40-year-old driver Joshua Brown, whom should the car, assuming its sensors and radar are in good working order, choose to save: pedestrians or its passengers?


Read: How do you program ethics into a driverless car?


Sarah Thornton is a graduate student at Stanford University’s Dynamic Design Lab in California. A mechanical engineer by training, she researches the ethics of autonomous vehicles and believes the companies manufacturing them should place greater emphasis on philosophical frameworks throughout every phase of the cars’ design process, including in the management of the company itself. That way, technologists and engineers aren’t solely responsible for the ethics guiding the cars’ computer programming.


“Having these philosophical justifications, they allow the engineer to justify their programming decisions and it shows the company put thought into the decisions they made, instead of arbitrarily choosing a value or decision,” she says.


It’s already evident that different driverless car manufacturers are going to emphasize different ethical principles. Thornton says Google is holding off on deploying self-driving cars until it develops a truly autonomous vehicle, one where a steering wheel isn’t necessary. By contrast, Tesla and other companies emphasize autopilot features, where the driver is assisted rather than replaced.


Read: What’s it really like in the driver’s seat of a driverless car?


By far the most popular framework considered in relation to the ethical programming of these cars is utilitarianism, Thornton says. This philosophical viewpoint holds that the correct decision is the one whose outcome benefits the most people. An article published in the journal Science in June outlined the results of a series of surveys that posed questions such as whether autonomous vehicles should prioritize the lives of their passengers or of pedestrians.


The majority of respondents chose utilitarianism: if the car were going to kill either 10 pedestrians or one driver, it should save the pedestrians. But respondents also admitted they would never purchase a car that placed greater value on those outside the vehicle; they would only buy an autonomous car that protected the driver.


Thornton says, “That phenomenon is actually found in many other instances. It’s a problem of first-party versus third-party ethics; as a third party you say this is great, but when you pose it to yourself as a first-party person, you don’t want to ride in that [car].”
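

As a purely illustrative aside, the utilitarian rule the Science surveys describe reduces to a very small computation: among the available outcomes, choose the one that harms the fewest people. The Python sketch below is a toy, with hypothetical names and numbers; no manufacturer has published decision logic anywhere near this simple.

# A toy sketch of the utilitarian rule described above: among the
# possible outcomes, pick the one that harms the fewest people.
# The scenario, names, and numbers are hypothetical illustrations only.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    casualties: int  # people harmed if this outcome occurs

def utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
    """Return the outcome with the fewest casualties."""
    return min(outcomes, key=lambda o: o.casualties)

# The 10-versus-1 dilemma posed in the surveys:
dilemma = [
    Outcome("swerve, sacrificing the one driver", casualties=1),
    Outcome("stay on course, hitting the 10 pedestrians", casualties=10),
]
print(utilitarian_choice(dilemma).description)
# -> swerve, sacrificing the one driver

The paradox Thornton points to is visible even in this toy: the rule is easy to endorse from outside the car and hard to accept from the driver’s seat, since the minimum-casualty outcome is the one that sacrifices the occupant.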


Concerns about jumping on the autonomous vehicle bandwagon are legitimate, she continues. “People are scared of robots and robot overlords; they don’t want them taking over. But I also think it’s justified that this is new technology and we should be keeping a closer eye on it as we introduce it to the greater public. For some reason, people think we need to rush to market with it, but we can take our time when making sure it is safe for everyone involved.”


__________________________________________________________________________
Copyright 2016 Rogers Publishing Ltd. This article first appeared in the Fall 2016 edition of Corporate Risk Canada magazine.


