Tuesday, September 29, 2020

AI for self-driving cars has no time for crime

Existing approaches to artificial intelligence for self-driving cars don’t account for the fact that people might try to use autonomous vehicles to do something bad, researchers report.

For example, let’s say there’s an autonomous vehicle with no passengers and it’s about to crash into a car containing five people. It can avoid the collision by swerving out of the road, but it would then hit a pedestrian.

Most discussions of ethics in this scenario focus on whether the autonomous vehicle’s AI should be selfish (protecting the vehicle and its cargo) or utilitarian (choosing the action that harms the fewest people). But that either/or approach to ethics can raise problems of its own.

Moral Judgment

“Current approaches to ethics and autonomous vehicles are a dangerous oversimplification—moral judgment is more complex than that,” says Veljko Dubljević, an assistant professor in the Science, Technology & Society (STS) program at North Carolina State University and author of a paper outlining this problem and a possible path forward.

“For example, what if the five people in the car are terrorists? And what if they are deliberately taking advantage of the AI’s programming to kill the nearby pedestrian or hurt other people? Then you might want the autonomous vehicle to hit the car with five passengers.

“In other words, the simplistic approach currently being used to address ethical considerations in AI and autonomous vehicles doesn’t account for malicious intent. And it should.”

An Agent-Deed-Consequence Approach

As an alternative, Dubljević proposes using the so-called Agent-Deed-Consequence (ADC) model as a framework that AIs could use to make moral judgments. The ADC model judges the morality of a decision based on three variables.

First, is the agent’s intent good or bad? Second, is the deed or action itself good or bad? Lastly, is the outcome or consequence good or bad? This approach allows for considerable nuance.

For example, most people would agree that running a red light is bad. But what if you run a red light in order to get out of the way of a speeding ambulance? And what if running the red light means that you avoided a collision with that ambulance?
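The three-variable structure of the ADC model can be sketched in code. The scoring scheme, class, and function names below are hypothetical illustrations, not taken from Dubljević's paper: each component is rated good or bad and the three ratings are combined into an overall judgment.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    agent_good: bool    # Is the agent's intent good?
    deed_good: bool     # Is the deed or action itself good?
    outcome_good: bool  # Is the outcome or consequence good?

def adc_judgment(s: Scenario) -> str:
    """Combine the three ADC components into a simple overall verdict.

    Each good component scores +1, each bad one -1; the sign of the
    total decides the judgment. This additive rule is a toy assumption
    for illustration only.
    """
    score = sum(1 if v else -1
                for v in (s.agent_good, s.deed_good, s.outcome_good))
    if score > 0:
        return "morally acceptable"
    if score < 0:
        return "morally unacceptable"
    return "morally ambiguous"

# The red-light example: a bad deed (running the light) performed with
# good intent (clearing the way for an ambulance) and a good outcome
# (the collision is avoided).
verdict = adc_judgment(Scenario(agent_good=True, deed_good=False, outcome_good=True))
print(verdict)  # -> morally acceptable
```

Note how the same deed flips the verdict when intent and outcome change: running a red light for no reason, causing a crash, scores two bad components against one and comes out "morally unacceptable" — the kind of nuance a single selfish-versus-utilitarian switch cannot express.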

“The ADC model would allow us to get closer to the flexibility and stability that we see in human moral judgment, but that does not yet exist in AI,” says Dubljević.

More Research Required

“Here’s what I mean by stable and flexible. Human moral judgment is stable because most people would agree that lying is morally bad. But it’s flexible because most people would also agree that people who lied to Nazis in order to protect Jews were doing something morally good.

“But while the ADC model gives us a path forward, more research is needed,” Dubljević says. “I have led experimental work on how both philosophers and lay people approach moral judgment, and the results were valuable. However, that work gave people information in writing. More studies of human moral judgment are needed that rely on more immediate means of communication, such as virtual reality, if we want to confirm our earlier findings and implement them in AVs.

“Also, vigorous testing with driving simulation studies should be done before any putatively ‘ethical’ AVs start sharing the road with humans on a regular basis. Vehicle terror attacks have, unfortunately, become more common, and we need to be sure that AV technology will not be misused for nefarious purposes.”

• Matt Shipman specializes in research communication and media relations for NC State, including writing, editing, content strategy, media outreach, and social media management. This article was originally published on Futurity.

