
How can autonomous vehicles be tricked into dangerous stops?  

Reading Time: 2 minutes

A faulty decision by the collision-avoidance system of a driverless car in motion can lead to disaster. The University of California, Irvine, has identified one such risk: con artists can trick an AV into an abrupt halt simply by placing an ordinary object on the roadside.

“A box, bicycle, or traffic cone may be all that is necessary to scare a driverless vehicle into coming to a dangerous stop in the middle of the street or on a freeway off-ramp, creating a hazard for other motorists and pedestrians,” said Qi Alfred Chen, UCI professor of computer science and co-author of a paper on the subject presented recently at the Network and Distributed System Security Symposium in San Diego.

It is not practical for the vehicles to distinguish between objects left on the road by pure accident and those placed intentionally as part of a denial-of-service attack. “Both can cause erratic driving behavior,” said Chen.

The planning module, a section of the software that governs autonomous driving systems, is the focus of Chen and his team’s inquiry into security flaws.

The different functions, such as cruising, changing lanes, or slowing down and stopping, are governed by this component to decide how the vehicle will respond.
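Purely as an illustration of the idea above, and not as Apollo's or Autoware's actual code, an overly cautious planning-module decision rule might look like the following sketch (all names and thresholds here are hypothetical):

```python
# Hypothetical sketch of an overly cautious planning-module decision rule.
# None of these names or thresholds come from Apollo or Autoware; they only
# illustrate why a harmless roadside object can trigger a permanent stop.

def plan_action(obstacles, lane_clear, safety_margin_m=3.0):
    """Pick a driving behavior given perceived obstacles.

    obstacles: list of (distance_to_path_m, on_road) tuples.
    """
    for distance_to_path_m, on_road in obstacles:
        # A conservative planner may brake for anything near the planned
        # path, even a box or traffic cone sitting on the shoulder.
        if on_road or distance_to_path_m < safety_margin_m:
            return "stop"
    if not lane_clear:
        return "change_lane"
    return "cruise"

# A cardboard box 2 m off the path is enough to halt the vehicle:
print(plan_action([(2.0, False)], lane_clear=True))  # "stop"
print(plan_action([(5.0, False)], lane_clear=True))  # "cruise"
```

The point of the sketch is the fixed safety margin: anything that strays inside it, malicious or not, forces a stop.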

“The vehicle’s planning module is designed with an abundance of caution, logically, because you don’t want driverless vehicles rolling around, out of control,” said lead author Ziwen Wan, UCI Ph.D. student in computer science. “But our testing has found that the software can err on the side of being overly conservative, and this can lead to a car becoming a traffic obstruction, or worse.”

A testing tool known as PlanFuzz, designed by the researchers at UCI’s Donald Bren School of Information and Computer Sciences, automatically detects vulnerabilities in automated driving systems.

The team used PlanFuzz to evaluate three different behavioral planning implementations of the open-source, industry-grade autonomous driving systems Apollo and Autoware.
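PlanFuzz itself is not reproduced here; but the general fuzzing approach it embodies, generating many randomized driving scenes and flagging those that make the planner stop, can be conveyed with a toy sketch (the planner and all parameters below are invented for illustration):

```python
import random

# Toy illustration of fuzzing a planner: generate scenes with random
# roadside objects and flag those that make a (hypothetical) overly
# cautious planner stop. This is not PlanFuzz; it only conveys the idea.

def toy_planner(scene):
    # Stand-in planner: stop if any object is within 3 m of the path.
    return "stop" if any(d < 3.0 for d in scene) else "cruise"

def fuzz(planner, rounds=1000, seed=42):
    rng = random.Random(seed)
    triggers = []
    for _ in range(rounds):
        # A scene is a list of distances (m) from roadside objects
        # to the vehicle's planned path.
        scene = [rng.uniform(0.0, 10.0) for _ in range(rng.randint(0, 3))]
        if planner(scene) == "stop":
            triggers.append(scene)  # candidate denial-of-service input
    return triggers

found = fuzz(toy_planner)
print(f"{len(found)} scenes caused a stop")
```

Each flagged scene is a candidate physical-world attack: an object placement a real adversary could recreate with a box or a bicycle.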

In the researchers’ experiments, cardboard boxes and bicycles placed on the side of the road misled the vehicles, with the same result each time: the vehicle stopped permanently on empty thoroughfares and at intersections.

In another round of tests, autonomously driven cars, perceiving a nonexistent threat, neglected to change lanes as planned.

“Autonomous vehicles have been involved in fatal collisions, causing great financial and reputation damage for companies such as Uber and Tesla, so we can understand why manufacturers and service providers want to lean toward caution,” said Chen. “But the overly conservative behaviors exhibited in many autonomous driving systems stand to impact the smooth flow of traffic and the movement of passengers and goods, which can also have a negative impact on businesses and road safety.”

On this NSF-funded project, Junjie Shen, a UCI Ph.D. computer science student, Jalen Chuang, a UCI undergraduate computer science student, Xin Xia, a UCLA postdoctoral scholar in civil and environmental engineering, Joshua Garcia, a UCI assistant professor of informatics, and Jiaqi Ma, a UCLA associate professor of civil and environmental engineering, collaborated with Chen and Wan.

Source: University of California, Irvine
