Gone in six seconds? Exploiting car alarms (pentestpartners.com)
95 points by alphabetter on March 8, 2019 | 37 comments



I had a Viper alarm with these features installed in my car back in 2012 and immediately noticed that while their iOS app used SSL to talk to the API, it never actually validated the certificate, so it was trivial to set up a man-in-the-middle proxy, grab a user's auth token, and make requests as them. According to their reply, their devs weren't able to replicate it, which told me all I needed to know about their ability to write secure software. It's good to hear they responded quickly in this instance, but I'm not sure I'd ever trust their devices again.
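(For anyone wondering what "trivial" looks like: with validation off, capturing the token is a few lines of a mitmproxy addon. A rough sketch; the Authorization header is just my guess at where the token lived.)

    # grab_token.py - run with: mitmdump -s grab_token.py
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # With cert validation disabled client-side, the proxy sees plaintext
        # requests; log whatever bearer token the app sends (header name assumed).
        token = flow.request.headers.get("Authorization")
        if token:
            print(f"captured token for {flow.request.host}: {token}")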


> it never actually validated the certificate,

While I agree with everything above, it can be humbling to consider the huge number of people already in control of that car (at the car company, software partner, hosting partner, phone maker). But extending that trust to the local network amounts to an inexcusable security problem.

It is interesting that having legitimate control over a certificate makes the same behavior a desired feature rather than a huge security problem. The real world may not be all that black and white.


Yeah, I agree. I probably won't own another system like this from any manufacturer as long as I can avoid it. Luckily my car came out just before all the OEMs started putting these cellular modems in them that are attached directly to the CAN bus.

I don't think it was the bug itself that bothered me so much as their response. I sent them an extremely clear email with the exact steps I took, plus screenshots showing how other apps responded to my fake cert with error/warning dialogs. It was escalated directly to the engineering team, and they seemed to have no idea what I was describing or why it was an issue. I assumed at that point the issues went a little deeper than what I had uncovered, and it seems from this post I wasn't too far off the mark.


So, vulnerable web apps exploited to attack internet-connected cars? You'd think they'd have learned from Nissan like two years ago?

https://jalopnik.com/how-the-nissan-leaf-can-be-hacked-via-w...


This is one reason why Tesla’s OTA update system scares me. As bad as hacking an alarm or some other secondary system is, imagine having your brakes hacked.


I see the point, but the alternative also scares me: software that never gets updated.


It could be updated using a file you download to a USB drive and then plug in to a port inside the car. That way it would require physical access to the car, and the update file could be signed to only work in your car specifically.

The only reason to do OTA updates is convenience.
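A rough sketch of what that car-side check could look like, assuming Ed25519 via the Python 'cryptography' package and a made-up VIN-binding scheme:

    # verify_update.py - per-car signed update check (sketch, hypothetical format)
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def update_is_valid(image: bytes, sig: bytes, vin: str, maker_pub: bytes) -> bool:
        # The manufacturer signs (VIN || image), so an update file signed for
        # one specific car fails verification on every other car.
        pub = Ed25519PublicKey.from_public_bytes(maker_pub)
        try:
            pub.verify(sig, vin.encode() + image)
            return True
        except InvalidSignature:
            return False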


Signing updates specifically would also still work with OTA delivery. Especially if the update can only be triggered from inside the car while simultaneously pressing some buttons. How the update is transferred should not matter for security.


OTA enables progress.

Assume there's a bug, a safety-critical bug. You cannot reasonably recall every car to the shop at the same time, and meanwhile you keep risking lives as owners "don't upgrade".

Over time, OTA updates also increase software quality and enable experimentation and slow, controlled rollouts.

You are worried about short-term problems over long-term promise.


> You cannot reasonably recall every car to the shop at the same time, and meanwhile you keep risking lives as owners "don't upgrade".

That's how it worked for all the cars before Tesla, so... yes, you can.


"thats how we did it before" is the number one inhibitor in the name of progress.

Remember Windows before automatic updates: full of security holes with no way to patch them, and bugs that lingered and created problems that could never be fully eliminated.

Let me tell you something you didn't think of. Imagine I do my diligence before releasing software but don't fully factor in all the unknowns. It happens, shit breaks all the time, right? Now imagine that instead of uncontrollably recalling everyone to the shop, I start a 1% rollout, gather data, find some problems, investigate, and push the real fix. See the efficiency gains? I prevented chaos for 99% of the cars, and I did something data-driven.

Good luck doing that without OTA.
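The gate itself is tiny. A minimal sketch of a deterministic staged rollout (keying on a vehicle id is my assumption):

    # rollout.py - deterministic staged-rollout gate (sketch)
    import hashlib

    def in_rollout(vehicle_id: str, percent: float) -> bool:
        # Hash the id into a stable bucket in [0, 100); the same car always
        # gets the same answer, so ramping 1% -> 10% -> 100% only adds cars.
        h = int.from_bytes(hashlib.sha256(vehicle_id.encode()).digest()[:8], "big")
        return (h % 10000) / 100.0 < percent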


> It happens, shit breaks all the time, right?

When people are relying on software for their safety, no, it doesn't. Bugs in critical systems for things like planes and cars are rare precisely because moving fast kills people. Using 1% of your users as testers is fine if you're making a website, but much less fine if your new code means they might die.


Real life disagrees with you. It's simple math: roll out a safety update to everyone at once and have no idea how it performs in the real world, or roll out slowly and get the data.

Bringing cars into the shop is no improvement over what I am proposing.


Most people wouldn't bother.


They would if it meant failing the yearly checkup, and thus making the car illegal to drive.


Anything safety-critical should require a recall; for everything else, if the user isn't bothered, that tells you a lot about how important updates to car software actually are.


I'd love to see statistics on how many injuries/deaths occur from recalled problems that users don't bother getting fixed.

I know I've had cars where certain defects (non-safety) were recalled, yet it took 2-3 weeks of lead time to get an appointment with the dealership, and several days in the shop once the car was there (without a loaner vehicle). And I'm not talking full engine rebuilds either, just simple fixes. Most of the time I don't even bother anymore because it's such a hassle.


Not good thinking. See my response above.


In other words, like almost every ECU on the planet right now.

Why do my brakes need a software update? Is that not something that we as an industry can get right before shipping a car?


There's a fairly notable number of recalls that involve flashing new software to body control modules, ECMs, etc., sometimes for dangerous problems around throttle control. For example: https://repairpal.com/recall/14V583000


> In other words, like almost every ECU on the planet right now.

You're mistaken. Almost every ECU on the planet right now is flashable and they are indeed often updated as part of routine servicing, particularly on brand new models.


As others have pointed out, the alternative to OTA isn't never updating critical systems; you're presenting a false dichotomy.


Not really. About 25% of vehicles will never get updated, and you'll risk lives over the years it takes for recalls to be remedied.

You'll also have no way to measure the impact of the update, and chances are it isn't perfect...

Data: https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13...


Brakes are physical on a Tesla.


This is where, quite literally, the rubber hits the road, and where we need extreme regulatory oversight of cybersecurity in cars. I don't like fearmongering, but can you imagine what would happen if a terrorist group got hold of an exploit like this?


This issue is as much a market failure as it is a regulatory failure.

Here in the UK, the insurance industry collectively funds Thatcham Research, an independent body that assesses the safety, security and repairability of new cars. Thatcham's assessments are hugely important to motor manufacturers, because they directly influence the cost of insurance; a good rating from Thatcham means a low insurance group rating, which is an important selling point. It's a fantastic example of what happens when everyone's interests are aligned.

Thatcham also assesses aftermarket security equipment; most insurers offer discounts on premiums where Thatcham-approved equipment is fitted. It is worth noting that neither of the products mentioned in this article is Thatcham approved.

https://www.thatcham.org/what-we-do/security/


Very nice. I've always thought that once real self-driving cars arrive, human driving will be driven out of existence by insurers.


Does regulation actually improve software security in practice?

I'm skeptical it does. I've seen a few times how regulated software (certain billing systems) got certified, and it was a bad joke. Maybe you have different experience, though.

If you want government bodies to spend taxpayers' money on something, I'd suggest spending it on funding security researchers to actively attack systems and cooperate with manufacturers on fixing the discovered issues (and you can legally mandate such cooperation). This might actually improve end-user security. Although you'd have to somehow audit that those researchers are actually doing something...


You can regulate the response time and the required infrastructure for the distribution of patches.

For manufacturers to actually listen to security research, you'd need regulation as well.

You could also require all or certain software in cars to be open source.


> Does regulation actually improve software security in practice?

Yes, it can. DO-178B is a widely used security standard in military equipment. It's difficult and expensive to obtain, and caters to fighter jets, not cell phones, but there is precedent for true technical security improvements through government programs.


Hang on, is this DO-178B _different_ from the DO-178B now replaced by DO-178C that is about _safety_ in aircraft systems? It isn't, is it? So, you're talking about _safety_ when the parent and article are specifically about _security_.

Because confusing safety and security is exactly the kind of awful goof we're talking about here. I'm sure these car alarms are _safe_; the problem is they don't keep your car _secure_.

The security record of military procurements is... not good. Same for the financial industry. Do a bad job, hope nobody finds out, if they do insist you didn't do a bad job and hope nobody who understands the difference is empowered to do anything about it.

Let's take something easy: communications. Your generic Android phone is capable of secure voice communications over a distance, subject only to traffic analysis and other inevitabilities.

The British infantry have Bowman. At squad level it's unencrypted spread spectrum voice radio. So, much worse than that Android phone. A sophisticated bad guy (so, not some random bloke who decided to join ISIS last week, but say, a Russian armoured division you've been deployed to counter) can literally listen to everything you say, seamlessly, without giving away their position. Brilliant.

Now regulation _can_ improve things by mandating something that people who know what they're doing already recommend. But you're not going to get there with things like DO-178B.


> Hang on, is this DO-178B _different_ from the DO-178B now replaced by DO-178C that is about _safety_ in aircraft systems? It isn't, is it? So, you're talking about _safety_ when the parent and article are specifically about _security_.

It's about both. The DO-178B/C standards require software to work as formally specified, and require robust branch testing to ensure the code conforms to the spec. This means different things for different applications, but in an operating system, for example, it means that no process can affect any other process if the two are deemed independent. This requires that the OS prevent all covert channels, strictly limit memory, CPU performance, etc. For example, a fork bomb wouldn't be able to affect other processes, and caches are wiped on every context switch.

So yes, it is definitely relevant to security as well as safety (which in military aerospace go hand-in-hand anyway).


It might be interesting to require a bug bounty program with high enough reward values. If companies want to insure themselves against the potential payouts, insurance companies could step in with their own standards and testing methods.


I don't think there's much regulation for third-party alarms. You could regulate the OBD-II port, I suppose. Some manufacturers isolate it from the more dangerous internal CAN bus; others don't. Though these high-end alarms can be hooked into that too.


Will we have to take off our shoes before entering our cars?


I’ve seen DHS presentations on risks of hijacking networked cars. I can’t say they will effectively manage the threat, but they are absolutely talking about it.


So many ‘security’ companies making coding mistakes that there’s simply no excuse for.

How are these companies remaining in business? Call yourself unhackable and then don't bother to even authenticate API requests... the mind boggles.
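For contrast, the missing check is a couple of lines. A hypothetical Flask handler (stub token store; not anyone's actual API):

    # ownership_check.py - the authorization step these vendors reportedly skipped
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    TOKENS = {"token-for-user-42": 42}  # stub: maps bearer token -> account id

    @app.post("/api/users/<int:user_id>/email")
    def update_email(user_id: int):
        caller = TOKENS.get(request.headers.get("Authorization", ""))
        if caller != user_id:
            abort(403)  # reject writes to accounts the caller doesn't own
        return jsonify(ok=True)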



