Can a Connected Car Ever Be Safe from Hacking?

Charlie Miller made headlines in 2015 when he and another cybersecurity researcher, Chris Valasek, remotely hacked into a Jeep Cherokee as it was being driven down the highway for a story in Wired magazine. (The Cherokee’s driver, a Wired writer, knew what was happening.) Having made no prior physical contact with the vehicle, Miller and Valasek used the Jeep’s Uconnect cellular connection to access its computers and take control of its acceleration, braking, and, during a later test, even its steering—all from Miller’s couch. The most chilling aspect of the attack? It would have worked on any vehicle in the United States running that version of the Uconnect software. Their findings led to a recall of 1.4 million Chrysler, Dodge, Jeep, and Ram vehicles to have the vulnerability rectified. Miller has led driverless-vehicle cybersecurity at Uber, and both he and Valasek recently accepted positions with Cruise Automation, the GM subsidiary focused on the development of computer-driven cars.

C/D: Do you see a clear path to truly securing a shared driverless car?

CM: Yeah, for sure. I mean, that’s my job, to do just that. If I can’t do that, we’re in trouble. There’s nothing special about the computers that autonomous cars are using, the hardware, or the software. None of that stuff is fundamentally different security-wise than securing networks or computers that are sitting in a corporate network. The only difference is that, if you mess up and a hacker gets in, the consequences are much more severe. But no new defensive techniques need to be invented. We just need to systematically apply what we know, take our time, and do it right.

C/D: That sounds fair enough, but you’ve read the headlines. So many companies and government entities fail to secure their networks. Are they simply failing to implement these best practices?

CM: It’s a little different than that. In some respects, securing a car is easier than securing a network of 20,000 laptops run by 20,000 people who are carrying them around, going to coffee shops, clicking on things, and surfing the web. A car at least is predictable. It’s still a lot of computers, but they’re used in predictable ways and we have a lot of control over the environment.

C/D: But as we advance toward new ownership models, we’ll have countless people with direct access to shared cars.

CM: For sure. People will have access to the vehicle for extended periods of time, and you need to design security into that. But the advantage you have is that, unlike a car company that’s making a car and has to secure it, if you’re talking about autonomous cars run by ride-sharing firms, you’ve got cars that come home at night. We can examine them, check them, and update them. Where a normal car company sells a car it may never see again, we will have the advantage of knowing where the cars are. So we can monitor them and bring them in, we can update them whenever we feel like it, we can add new hardware. There are a lot of things we can do to help ourselves.

C/D: In a full-on connected-car future, wouldn’t you have assorted vulnerabilities in the infrastructure as well?

CM: Well, presumably the infrastructure would all be talking to each other, and so each device is a potential point of attack. But the car’s computer needs to be smart enough to use multiple sources of data. You never want to rely completely on one sensor or one type of data.

C/D: Is it realistic or even necessary to adopt a set of standards to regulate the implementation of these practices and technologies?

CM: Standards are one thing and might be helpful to make sure everybody is doing at least the right general things, but in the end, it’s going to come down to each particular automaker and supplier doing everything right, checking each other, testing things, and getting third-party validation. Standards definitely are not going to solve the problem but might be the first step in the right direction.

C/D: Is it reasonable to fear hackers taking control of the whole fleet and, say, having all cars make a 90-degree left simultaneously?

CM: We could have done that in the Jeep, so yeah, that’s realistic.

C/D: Could you have done it to multiple cars at the same time?

CM: Absolutely. At any one time, all the cars that were on were vulnerable. Not all 1.4 million that were recalled are on at any given time, but all that were on [during our test], we could have scanned, found, and made do things. It would take some preparation—you can’t instantly scan and find 100,000 cars in one second, but you could scan over time, find them, and then once you knew where they were, and maybe you’ve programmed some part of your attack ahead of time, you could push a button and, yeah, they’d all do something.

C/D: How long was it from when you got your hands on a Jeep to the time you were able to implement that hack?

CM: It took a couple of months to even know there was a problem. But to get from end to end, where we exploited the vulnerability and we could do the most dangerous kinds of attacks, including turning the steering wheel at speed, that took close to two years. Of course, this wasn’t our day job; we did it on weekends and evenings, so [working full time] we could have gone faster. So maybe with more people . . . But in reality, it took close to two years, and so it’s not something that some kids are going to do overnight or in a weekend. It is difficult, but at the same time, you don’t ever want two guys in their basement to figure out how to do that.

C/D: Are there any practical steps the industry should be taking that it isn’t?

CM: The one I’d love to see is more transparency about what steps they are taking. They say they’re working together, but I’d love to see academic people and researchers helping them as well. Security should not be a competitive advantage in any market, especially one that has to do with physical safety. The good news is, we’re not going to have an autonomous car tomorrow or next week. Nobody knows when it’s going to be, but we have time to get it right. We’re in a good position now to do it.

C/D: Is it possible to make a system unhackable?

CM: No. Nobody knows how to do that. But what you can do is make it so hard that nobody cares enough to do it.

C/D: But when the potential for mayhem is so high . . .

CM: I guess. But when you make it so hard that only someone with a billion dollars can do it . . . A person with a billion dollars can cause a lot of mayhem in a lot of ways. Spending that to hack into autonomous cars probably isn’t number one on that list. Hacking cars is hard.

Thinking Tank

The cybersecurity threat ratchets even higher when computer-driven vehicles go to war. Dariusz Mikulski is a ground-vehicle robotics research scientist for the U.S. Army Tank Automotive Research, Development, and Engineering Center, which plans to have driverless supply vehicles in war zones within 10 to 15 years. —Eric Tingwall

C/D: What’s at stake if the military doesn’t get cybersecurity right?

DM: We have a lot of wireless devices . . . radios, radar, lidar, short-range and long-range communications, satellites, exposed ports, and we have supply-chain issues. We have a lot of different potential avenues that someone could exploit and have complete control of the entire vehicle.

C/D: Does the threat change when you’re fighting insurgents rather than a foreign government? Are they more or less capable of pulling off a cyber attack?

DM: With cyber, there are low barriers to entry. People can learn it pretty easily. It’s somewhat cheap to get involved in. That’s not to say that it’s easy to break into one of our systems. Even though there may be tools available and people may share the knowledge, there’s a lot of domain-specific knowledge that’s not publicly available.

C/D: If you can’t make a computer system unhackable, what’s the best defense?

DM: The whole point of doing cybersecurity is that you want to create a bigger time buffer between when an adversary begins and when they actually get to your crown jewels. You want to make the time buffer so big that by the time they actually get to where they want to get, it doesn’t matter anymore. The data is obsolete, it’s irrelevant. If it takes an adversary six months to get into your system, but you can patch it in a couple of hours, that’s a big win from a cybersecurity standpoint.
