Hackers are the real obstacle for self-driving vehicles
Before autonomous trucks and taxis hit the road, manufacturers will need to solve problems far more complex than collision avoidance and navigation (see “10 Breakthrough Technologies 2017: Self-Driving Trucks”).
These vehicles will have to anticipate and defend against a full spectrum of malicious attackers wielding both traditional cyberattacks and a new generation of attacks based on so-called adversarial machine learning (see “AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks”).
As consensus grows that autonomous vehicles are just a few years from being deployed in cities as robotic taxis, and on highways to ease the mind-numbing boredom of long-haul trucking, the risk of attack has been largely absent from the breathless coverage.
It reminds me of the numerous articles promoting e-mail in the early 1990s, before the newfound world of electronic communications was awash in unwanted spam. When spam did arrive, machine learning was held up as the solution. And indeed, today the problem of spam is largely solved, but it took decades to get there.
There have been no reports to date of hostile hackers targeting self-driving vehicles. Ironically, though, that's a problem. There were no malicious attackers when the dot-com startups of the 1990s developed the first e-commerce platforms, either. It was only after the first big round of e-commerce hacks that Bill Gates wrote his 2002 company-wide memo demanding that Microsoft take security seriously.
The result: today Windows is one of the most secure operating systems, and Microsoft spends more than a billion dollars annually on cybersecurity. Nevertheless, hackers keep finding problems in Windows, in Web browsers, and in applications.
Car companies are likely to go through a similar progression. After being widely embarrassed by their failure to consider security at all—the CAN bus, designed in the 1980s, has no concept of authentication—they now appear to be paying attention.
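To make concrete what a bus with authentication could look like, here is a minimal sketch in Python: each frame carries a truncated HMAC tag computed over its identifier, its payload, and a message counter, so a receiver can reject forged or replayed frames. The frame layout, shared-key setup, and function names are hypothetical illustrations, not any automaker's actual design; production schemes such as AUTOSAR SecOC face further constraints, like fitting tags into CAN's eight-byte payload and managing keys across suppliers.

# A minimal, hypothetical sketch of message authentication for a vehicle
# bus. The CAN standard itself has no such mechanism; everything below is
# illustrative, not a real automotive protocol.
import hmac
import hashlib
import struct

SHARED_KEY = b"per-vehicle secret provisioned at the factory"  # hypothetical

def authenticate_frame(can_id: int, payload: bytes, counter: int) -> bytes:
    """Compute a truncated HMAC tag binding the frame ID, payload, and counter.

    The counter defeats replay attacks: a recorded 'unlock doors' frame
    cannot simply be transmitted again later.
    """
    message = struct.pack(">IQ", can_id, counter) + payload
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return tag[:8]  # truncated to respect bus bandwidth constraints

def verify_frame(can_id: int, payload: bytes, counter: int, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time; reject on mismatch."""
    expected = authenticate_frame(can_id, payload, counter)
    return hmac.compare_digest(expected, tag)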
When hackers demonstrated that vehicles on the roads were vulnerable to several specific security threats, automakers responded by recalling and upgrading the firmware of millions of cars. Last July, GM CEO Mary Barra said that protecting cars from a cybersecurity incident “is a matter of public safety.”
But the efforts being made to date may be missing the next security trend. The computer vision and collision avoidance systems under development for autonomous vehicles rely on complex machine-learning algorithms that are not well understood, even by the companies that rely on them (see “The Dark Secret at the Heart of AI”).
Last year, researchers at Carnegie Mellon University demonstrated that state-of-the-art face recognition algorithms could be defeated by a pair of clear glasses with a funky pattern printed on the frames. Something about the pattern tipped the algorithm in just the right way, and it thought it saw what wasn't there. "We showed that attackers can evade state-of-the-art face recognition algorithms that are based on neural networks for the purpose of impersonating a target person, or simply getting identified incorrectly," lead researcher Mahmood Sharif wrote in an e-mail.
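The CMU team optimized a printable pattern confined to eyeglass frames. A simpler, standard way to see the underlying idea is the fast gradient sign method, sketched below in Python; this is an illustrative assumption, not the researchers' actual technique. It nudges every input pixel slightly in whichever direction most increases the classifier's loss, which is often enough to flip the model's answer.

# A minimal sketch of the fast gradient sign method (FGSM), one textbook
# way to craft adversarial examples. Not the CMU attack, which confined
# its perturbation to a printable eyeglass-frame region.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    model:      any differentiable classifier returning logits
    image:      input tensor, e.g. shape (1, 3, H, W), values in [0, 1]
    true_label: tensor holding the correct class index
    epsilon:    maximum per-pixel perturbation (L-infinity bound)
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()  # gradient of the loss with respect to each pixel

    # Step each pixel by epsilon in the direction that raises the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

The unsettling property of such attacks is how small epsilon can be: perturbations invisible to a human observer can reliably change what the network sees.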
Also last year, researchers at the University of South Carolina, China’s Zhejiang University, and the Chinese security firm Qihoo 360 demonstrated that they could jam various sensors on a Tesla S, making objects invisible to its navigation system.
Many recent articles about autonomous driving downplay or even ignore the idea that there might be active, adaptive, and malicious adversaries trying to make the vehicles crash. In an interview with MIT Technology Review, the chair of the National Transportation Safety Board, Christopher Hart, said he was “very optimistic” that self-driving cars would cut the number of accidents on the nation’s roads. In discussing safety issues, Hart focused on the need to program vehicles to make ethical decisions—for example, when an 80,000-pound truck suddenly blocks a car’s way.
Why would anyone want to hack a self-driving car, knowing that it could result in a death? One reason is that widespread deployment of autonomous vehicles is going to put a lot of people out of work, and some of them are going to be angry.
In August 2016, Ford CEO Mark Fields said that he planned to have fully autonomous vehicles operating as urban taxis by 2021. Google, Nissan, and others planned to have similar autonomous cars on the roads as soon as 2020. Those automated taxis or delivery vehicles could be vulnerable to being maliciously dazzled with a high-power laser pointer by an out-of-work Teamster, a former Uber driver who still has car payments to make, or just a pack of bored teenagers.
Asked about its plans for addressing the threat of adversarial machine learning, Sarah Abboud, a spokesperson for Uber, responded: “Our team of security experts are constantly exploring new defenses for the future of autonomous vehicles, including data integrity and abuse detection. However, as autonomous technology evolves, so does the threat model, which means some of today’s security issues will likely differ from those addressed in a truly autonomous environment.”
It will take only a few accidents to stall the deployment of driverless vehicles. That probably won't hamper advanced autopilot systems, but it's likely to be a considerable deterrent for vehicles that are fully autonomous.
Simson Garfinkel is a science writer living in Arlington, Virginia. He is working on a new book about the history of computing.