People have been raging against the machine in Chandler, where Google-affiliate Waymo has been testing self-driving cars for the last couple of years.
According to a popular New York Times article, which pilfered material from an Arizona Republic article by Ryan Randazzo, Chandler police have recorded more than 20 incidents in which people have thrown rocks at autonomous Waymo cars, punctured a tire, swerved toward them, and in one case even pointed a gun at one.
But all these Waymo cars had one thing in common: a backup driver behind the wheel, and sometimes passengers, too. In our humble opinion, criminals, deplorables, pranksters, drunks, and fun-loving college students are even more likely to mess with autonomous vehicles if they believe no one’s inside them at all.
Other companies have begun testing in Arizona, including TuSimple, which this week announced it would increase its number of self-driving semi-trucks in the state to 40 (all with backup drivers). Self-driving vehicles with no backup drivers, or with no one inside at all, might multiply like locusts in the next few years, increasing the opportunities to mess with them.
Why would someone screw with the operations of an autonomous vehicle? Irrelevant. The day of attacking driverless vehicles has already arrived. It’s safe to assume such cases will become commonplace.
In fact, university researchers have studied this very issue and discovered several Achilles’ heels for autonomous vehicles, as we’ll explain. They and resistance pioneers like the Chandler rock throwers will help people adapt to a world of autonomous cars, flying drones, and sidewalk creepers that experts say is inevitable, by helping define the limits of human tolerance. They’ll also make artificial intelligence even better, as computers learn to avoid the unpredictable, lesser angels of our nature.
Below are a dozen ways people could mess with fully autonomous vehicles — or, to be more accurate, ways that autonomous vehicles can be messed with. These are clearly not suggestions for actual behavior; many would break the law or could get someone injured, though a few are harmless:
Get in, but not out. Autonomous vehicles at intersections presumably have their doors locked, but when an autonomous taxi is changing passengers, that’s an opportunity for an intruder to get inside. Maybe the person is a rude prankster, or maybe just drunk. Maybe the person tries to take over driving operations. What happens next? With a Waymo vehicle, the car would call its home base, and a voice would tell the intruder to get out. But he might not, until police come. In most, if not all, autonomous vehicles, cameras are recording everything at all times, meaning the authorities might track down such offenders.
Getting punked. If no driver or riders are around, who will take the banana out of the tailpipe or help catch a prankster? Of course, people who vandalize autonomous vehicles should face criminal prosecution, and damage that interferes with the vehicle’s operation could cause a crash and death. “Autonomous vehicle vandal” might need a special criminal charge.
Hack them. As a 2017 MIT Technology Review article stated, autonomous vehicles “will have to anticipate and defend against a full spectrum of malicious attackers wielding both traditional cyberattacks and a new generation of attacks based on so-called adversarial machine learning.” The article suggested that out-of-work truckers could be among these future super-villains. Hacking “wizards” Charlie Miller and Chris Valasek said last year that future robocars will be less prone to hacking than people imagine, but that it’s crucial to build vehicles with embedded security technology and other precautions, like avoiding remote uploads when possible.
Trap them. Jalopnik’s Jason Torchinsky mentioned this method in a 2013 article, suggesting that the vehicles could be stymied by humans intentionally standing in their way, or by traffic cones placed around them. What would an autonomous vehicle do if you put orange cones in front of and behind it? Would it know to simply run them over and continue?
Fool them with new road stripes. Taken to the extreme, this could be very dangerous. If certain portions of the road stripes were covered up, and new ones were added, would some autonomous vehicles follow the new “road” off a cliff? In 2017, a British artist seemed to show on video that if a driverless car drove into a circle with dashed white lines on the outside, and a solid white line on the inside, it would stop cold, unable to determine how to get out.
Alter road signs to fool computers, but not humans. University of Washington computer-security researcher Yoshi Kohno showed in 2017 that if you know the algorithms that help the computers in driverless cars process their detection data, the computers can be easily fooled. In a spooky demonstration of the potential weakness in self-driving cars, strips of black-and-white tape on a stop sign caused a lab-based autonomous system to see it as a 45-mph speed-limit sign.
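The idea behind that kind of attack can be sketched with a toy example. The code below is a minimal, hypothetical illustration (a hand-built linear classifier, not Waymo’s or anyone else’s real perception system): if an attacker knows a model’s weights, a small, targeted nudge to every pixel can flip the predicted label, even though a human looking at the image would barely notice the change.

```python
import numpy as np

# Toy, hypothetical "sign classifier" (not any real vendor's system):
# a positive score means "stop sign," a negative score means "speed-limit sign."
# Real perception stacks are deep networks, but the gradient-sign idea carries over.
w = np.array([0.5, -1.2, 0.8, 0.3, -0.7, 1.1, -0.4, 0.9])  # model weights, known to the attacker
x = np.array([0.6,  0.1, 0.5, 0.4,  0.2, 0.3,  0.1, 0.2])  # an 8-"pixel" stop-sign image

def label(img):
    return "stop" if float(w @ img) > 0 else "speed-limit"

# Gradient-sign-style perturbation: nudge every pixel slightly in the
# direction that lowers the "stop" score. No pixel changes by more than epsilon.
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)

print(label(x))      # stop
print(label(x_adv))  # speed-limit
```

In the real research, the “perturbation” was physical (strips of tape placed where the model’s gradients said they would do the most damage), but the principle is the same: small inputs, chosen with knowledge of the algorithm, produce a big change in the output.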
Drive aggressively around them. This one will come instinctively to most motorists as they’re forced to cohabit the roads with robo-vehicles. Common courtesy could go out the window when it comes to driverless vehicles, because in theory, the autonomous vehicle won’t feel road rage or even the slightest bit snubbed, and will always be the one to back off. A 2016 online survey by Goodyear and the London School of Economics spanning 11 countries concluded that some drivers “see an opportunity to take advantage of, or ‘bully,'” driverless vehicles. As one Brit put it, the AVs are “going to stop. So you’re going to mug them right off. They’re going to stop and you’re just going to nip round.”
The “feint.” This sort of attack was mentioned briefly in the Republic’s December 11 report: Police received a report of a “bike swerving dangerously at a [Waymo] van.” Pedestrians could step out in front of autonomous vehicles, or nearly so, to test their “reflexes.” Don’t think someone would do that? Maybe you haven’t seen the video of the guy jumping into cholla cactus. But the likeliest feints (a distraction before executing a move) will come in the normal course of driving, as motorists use subtle aggressions to treat driverless cars like timid teens.
Leave a foul mess in them. It’s easy to predict that people of means in the future will drive kickass, luxury autonomous vehicles. The masses may interact with driverless taxis more often, and these won’t be as pretty. All it takes is one unbathed, diarrhetic paint-sniffer to take a ride, or a prankster with a bent toward the disgusting, and the AV taxi is kaput for the day. How many chickens can fit inside an autonomous vehicle?
Cover their detectors. In many autonomous vehicles, the electronic eyes and other detection equipment, like lidar units, are on the outside, making them convenient targets for troublemakers. Duct tape, spray paint, Post-it notes, or even mud would likely disable a driverless car, and no driver would be present to get out, yell at the offending punk, and clear off the stickers or debris.
Play siren sounds near them. Two years ago, Chandler public safety officials helped Waymo vehicles learn what the sirens of local first-responder vehicles sounded like, and how the autonomous vehicles detected them. Autonomous vehicles of the future may be programmed to pull over if they hear a siren getting closer, possibly allowing motorists with burly loudspeakers to help clear the pesky robo-cars out of the way.
Order a fleet of robo-taxis to the same location. With the help of friends, and using multiple robo-taxi companies, pranksters could “command” a dozen or more vehicles for a short time. Brought together, maybe the vehicles could be made to compete for pullout spots or engage in a crash derby. Or maybe it would just be fun to see 20 vehicles pull up to a friend’s house at the same time, then block off the street with pylons or humans (see above).
Let’s just hope that AI doesn’t get so good that our vehicles start pranking back.