Autonomous / Self-driving commercial truck

Can autonomous vehicles drive ethically?

Take a second to imagine the future—a future complete with self-driving vehicles equipped with artificial intelligence (AI) capable of driving you to work, school, or vacation. For some of us, that reality may seem like a dream, and for others, a strange science fiction movie. Either way, what happens if something goes wrong during one of those future trips? For instance, in the case of an autonomous car with faulty brakes, how does it decide the best course of action?

Right now, the answer seems to be to ask as many people as possible—more specifically, as many people as will answer research questions about how to handle these situations. Would you choose to kill the passenger of a runaway car or the pedestrian in the crosswalk it is approaching? What if the passenger had a criminal record and the pedestrian was homeless? What if the passenger was a doctor and a pregnant woman was in the crosswalk? What if you had to choose between a cat and a dog, or between two cats and a dog?

These questions must be answered before vehicles can be fully automated, and to see how that's being done, try Moral Machine. Using the simulator, researchers developed a voting-based system for "automating ethical decisions, drawing on machine learning and computational social choice," according to a recent study.

The data collected from Moral Machine scenarios allowed researchers to determine a "general approach" to allow AI to "deduce the preferences of all voters over (a particular situation), and apply a voting rule to aggregate these preferences into a collective decision." Researchers are indeed crowdsourcing a modern version of the classic trolley problem as a step toward creating ethical autonomous vehicles.
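To make that aggregation step concrete, here is a minimal sketch in Python of a plurality voting rule applied to simulated answers for a single dilemma. It is only an illustration of the basic idea of turning individual preferences into a collective decision; the study's actual system learns a preference model for each voter before aggregating, and the outcome labels and votes below are hypothetical, not drawn from Moral Machine data.

from collections import Counter

# Hypothetical brake-failure dilemma: the car must choose one of two outcomes.
# These labels and ballots are illustrative, not taken from any real dataset.
ballots = [
    "protect_pedestrian",
    "protect_pedestrian",
    "protect_passenger",
    "protect_pedestrian",
    "protect_passenger",
]

def plurality_decision(ballots):
    # A simple plurality voting rule: whichever outcome the most voters
    # chose becomes the collective decision for this scenario.
    tally = Counter(ballots)
    decision, votes = tally.most_common(1)[0]
    return decision, votes

decision, votes = plurality_decision(ballots)
print(f"Collective decision: {decision} ({votes} of {len(ballots)} votes)")

Running the sketch prints the outcome favored by the majority of the simulated voters, which is the essence of using a voting rule to aggregate many individual moral judgments into one decision.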

As Beasley Allen has previously reported, autonomous vehicles will be a reality in the not-too-distant future. It's a reality that will test the limits of technology, product safety, and the law, raising questions about how reliable autonomous vehicles will be—and for whom. After all, the Moral Machine's options will quickly make cultural bias apparent. However, some are quick to point out that the goal isn't to make AI ethical.

James Grimmelmann, a professor at Cornell Law School who studies the relationships between software, wealth, and power, told The Outline: “It makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical.” The goal may not be purely ethical driving, but human-like driving and all the gray area that brings.

Of course, just because Moral Machine only gives us an A or B option for the vehicles does not mean we shouldn't strive for a C option with fewer casualties—a better option than a human could perhaps deduce in the moment. Painstaking care must be taken to ensure the future we create for ourselves is as safe as possible.

As Beasley Allen attorney Chris Glover pointed out in his article on automated truck platooning, “Hastily rushing a new product to market frequently yields less than desired results and often sacrifices consumer safety.” While the federal government and many states take steps toward clearing the way for self-driving vehicles, AI should be no exception to safety standards. You may have to give some thought to the trolley problem in the meantime.

Sources:
Moral Machine
The Outline
MIT Study: A Voting-Based System for Ethical Decision Making
Beasley Allen

Free Case Evaluation

Since 1979, Beasley Allen has been committed to “helping those who need it most.” Our attorneys have helped thousands of clients get the justice they desperately needed and deserved. You pay us nothing if we do not win for you. Contact us today for a free case evaluation.

For Disclaimers, see our Terms of Use.
