Firmware is on shaky ground – let's see what it's made of (theregister.com)
157 points by Sindisil on April 17, 2023 | 135 comments



As an EE I love the idea of open firmware. I wish more companies would provide it - akin to the old TVs and other equipment that came with [full!] schematics inside. It would let me truly understand and modify the items that I purchased.

There is definitely a cost to the companies, which I fully expect to be passed down to me, but not in the form of a license agreement; rather, in the form of an increase in base price.

The problem for companies is manifold. A big one is that the firmware is the piece that not only interacts with but also protects the hardware. If you are easily able to change the firmware, you are easily able to destroy the hardware, and if that's under warranty, companies are going to be concerned. They're also going to worry about IP; I certainly build products that lean on the work from previous projects. I would kind of hate handing that over to potential competitors. But some of that 'hate' depends on the fact that my competitors don't give their firmware code out either. Maybe I would love it if I could see how they implement things. Maybe it would push us all to deliver better things. Another impact: open firmware would definitely change sales models for equipment.

I imagine that if I had to start delivering open firmware for designs, I'd need to push some of the software control over product limitations into hardware. That might cost more, but is better anyway. Usually. And I'd try hard to figure out a way to install a 'unverified firmware' hardware flag, maybe an efuse blown in a hard-to-replace component, so that we could know who broke things.

But I do like the idea. I want the firmware for my ${everything}.


What cost?

Competition? That isn't definite.

Support overhead? I don't buy it.

> If you are easily able to change the firmware, you are easily able to destroy the hardware, and if that's under warranty, companies are going to be concerned.

...so void the warranty when flashing 3rd-party firmware.

How often, in reality, are people going to fry their hardware? It's not as if 99.99999% of 3rd-party firmware users are writing that firmware themselves! Hardware damage should be expected as an extreme edge case, not a broad looming risk.

---

If we are going to put this much effort into speculating cost, we should put equal effort into speculating value.

Open firmware is significantly likely to reduce the costs of compatibility and edge-case support. It is also likely to increase the value of the product by making it auditable and maintainable. It also factors out the cost of anti-user-maintenance efforts like DRM.

Most importantly, open firmware can stabilize the value of a product, increasing its resale price and delaying price decline. Unfortunately, this is the point that many companies consider negative, because they don't want to compete with themselves.


What do you mean that isn't definite? I'm very in favor of open firmware, but there have been multiple examples of clones popping up whenever firmware is public and open source.


Can hardware be protected (and defended in court) if copycats just copy a device wholesale?

If they construct a new device but reuse the firmware, that's sort of the point of having open firmware.


Open-source firmware wouldn't exactly be the same thing as public domain firmware. And even if a product's firmware isn't getting the benefit of copyright protection, the product as a whole can still be protected by patents and trademarks.


Exactly. Firmware shouldn't protect the hardware; that should be done in hardware, as firmware can go wrong.


> Most importantly, open firmware can stabilize the value of a product, increasing its resale price and delaying price decline. Unfortunately, this is the point that many companies consider negative, because they don't want to compete with themselves.

Can you share some examples of this?


Linksys got GPL'd when they released the WRT54G, which spawned OpenWRT, DD-WRT, and friends. This blunder on their part sparked a boom of open source firmware development, which ultimately made the WRT54G very popular. Compatibility with open source firmware is a hard requirement for any new router purchase that I make.


The GPL code release also only happened because of GPL enforcement; some of the history is written up here:

https://sfconservancy.org/copyleft-compliance/enforcement-st...


Several years back, I put Tomato firmware on an old WRT54G when all my old 802.11N devices were constantly crashing.

It was 100% worth the bandwidth downgrade. I practically never had to touch that router again.

I hope to never buy a router without DD-WRT (or equivalent free software) support again.


> A big one is that the firmware is the piece that not only interacts with but also protects the hardware. If you are easily able to change the firmware, you are easily able to destroy the hardware, and if that's under warranty, companies are going to be concerned.

Is that justifiable, though? I bricked a router some time ago messing around with ddwrt. I thought about soldering on a TTL serial adapter to recover it, but never got around to it; even so, never in my wildest dreams did I think of asking Netgear to replace the 8-year-old product I broke through my own actions. I do know at least one person who is reckless with "no questions asked" warranties, and would ask for a refund with a straight face after trying to use a router to reduce spaghetti sauce spattering when he microwaved his dinner, but these people can't be that common...

One area where it does seem slightly more justifiable is FCC-certified radio devices. If the transmitter power level is restricted by law, I'd prefer that end users/modifiers of the firmware be considered legally responsible for the consequences of their own actions, but I understand that pragmatically it's a lot easier to ask OEMs to lock down firmware after getting certified in a test lab than to monitor a million end users.


>I do know at least one person who is reckless with "no questions asked" warranties, and would ask for a refund with a straight face after trying to use a router to reduce spaghetti sauce spattering when he microwaved his dinner, but these people can't be that common...

There's common, and then there's common-enough-to-be-costly. REI had a notorious lifetime-return policy that they ended relatively recently because of abuse. How common was the abuse? No idea. I don't know many people who would Return Every Item (as the joke went), but it was common enough that there were always some really beat-up climbing shoes at their member garage sales.

And anyway, there's (at least) 2 kinds of costly: cost of returns, and cost to reputation when unqualified people brick their device, then tell all their friends that their router/refrigerator/laptop stopped working.


> I don't know many people who would Return Every Item (as the joke went), but it was common enough that there was always some really beat-up climbing shoes at their member garage sales.

I think you chose an idiosyncratic case here. There must be a significant number of their customers who go there to get fully outfitted for a single, relatively short trip.

If you depend only on shame to keep most people below the age of, say, 30 from returning a rent check's worth of camping gear after a single use, well... as you stated REI no longer allows that. (IIRC that happened shortly after the 2008 downturn).


It happened at least after 2013, because I went skiing with a guy who made it a point to return everything he'd ever bought from REI because of a court case he read about. It was a sort of super-boycott.

I chose an extreme case because the cost is clear and the consequences are very visible. Obviously, a company's peculiar financial situation will determine the acceptable return-and-refund/replace rate. Every company will have a financially acceptable return rate, and will look for easy fixes to keep their actual rate below that. Barring replace/refund for user modifications is a way to lower the refund/replace rate without actually decreasing the number of items that get returned. There's still a cost to inspecting devices for user modifications, and that cost may in turn lower the acceptable refund/replace rate, but it does mean the product development people have to spend fewer resources eliminating potential sources of failure.

None of this is to say that I think firmware shouldn't be field-editable. Just that barring those sorts of things is low-hanging fruit, and (in general) pretty easy to sniff out if the device is just bricked, not permanently damaged. I think manufacturers could instead enable firmware updating after cutting a particular trace, to the same effect. Thus, the would-be hacker must knowingly physically damage the product in a way that clearly voids the warranty, and then they're free to modify to their heart's content.


I was messing around with an I2C controlled lithium battery charger on a raspi last night (long irrelevant story) and found out it does NOT support "fast charge" (probably not enough thermal monitoring?) but it does support larger capacity batteries that coincidentally permit higher currents. So the short version is I can charge a battery twice as fast by lying to the charge controller that it's twice as large. On average, probably 99% of people can get away with that, the problem is 1% set the battery on fire or otherwise burn out the battery.

No matter how smart you make the controller you can't outsmart the user; to the battery charge controller, a 1 aH battery being treated as a 2 aH battery behaves like a 2 aH battery that's at 50% capacity due to age or whatever issues.

I would imagine, as long as I can keep it cool, I could install a 50 mAH micro battery and tell the charge controller it's a 2 aH battery, and it would charge very fast indeed. Perhaps only once, but it would be very fast. I suppose the worst case scenario is some kind of virus/cyberattack reprograms the FW to believe the battery is either 65536 mAH or 1 mAH; either way the battery would appear "dead" to end users.
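
For the curious, the poke itself is about this small from a Pi, using Linux's i2c-dev interface in C. The bus, device address, and "design capacity" register are all made up here; real charge controllers each have their own register map, so check the datasheet:

    /* sketch: overstate a battery's design capacity over I2C.
       Address 0x6b and register 0x20 are invented for illustration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/i2c-dev.h>

    int main(void) {
        int fd = open("/dev/i2c-1", O_RDWR);             /* the Pi's user I2C bus */
        if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x6b) < 0) {  /* made-up device address */
            perror("i2c setup");
            return 1;
        }
        /* made-up register 0x20 = "design capacity", little-endian mAh.
           Report 2000 mAh for a 1000 mAh pack and the controller doubles
           its charge-current target. */
        unsigned char msg[3] = { 0x20, 2000 & 0xff, 2000 >> 8 };
        if (write(fd, msg, 3) != 3) {
            perror("write");
            return 1;
        }
        close(fd);
        return 0;
    }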

Another common problem is marketing and mgmt may be a LITTLE over-optimistic about a feature; imagine if your IoT cigar humidor (made-up idea; probably does exist LOL) has a hardware barometric sensor on the I2C bus. I2C is famous for dodgy hardware being able to 'jam' the bus. Ah, no problem, nobody needs a baro on a humidor anyway; we'll just delete it from the marketing materials and remove it from the firmware, no need to e-cycle an otherwise good first batch of boards. Well, if someone uploads custom firmware that polls the baro every hour, and it randomly jams the I2C once every hundred polls, then "must be a thermal issue, the CPU crashes" and nobody might ever understand the problem. I mean, it's gotta be a hardware bug worthy of a return; everyone knows that if the code compiles and passes unit tests and works for a couple minutes, it must be good, right? But if the I2C transaction is interrupted by a wifi interrupt in the middle of a transfer, it crashes the whole bus, so randomly every couple weeks the device locks up.

Some big expenses are not brick fails but "it burst into flames while on an aircraft" or "everyone knows the hardware locks up randomly every couple weeks" causing all kinds of crazy bad PR.

Then there's interference with marketing models. Well, technically the hardware for rev1 and rev2 is the same; it's just that rev2 has more features because we eradicated bugs, so please pay us again, don't just download new FW.


Nit: it's Ah (A•h) and not aH, because the ampere (the A) is named for André-Marie Ampère, and the h is just the inanimate hour.

A proper SI interpretation of aH would be atto-Henry, where the henry is the unit of inductance (from Joseph Henry), and atto is 1e-18.


> If you are easily able to change the firmware, you are easily able to destroy the hardware, and if that's under warranty, companies are going to be concerned.

I don't think this reasoning makes any sense. The company can just declare that replacing or altering the firmware voids the warranty.

> I would kind of hate handing that over to potential competitors.

But you do that the instant that your product ships. I've shipped a lot of firmware in products, and it's very common to find my firmware reverse engineered and available within a month or so of the product being released.


> I don't think this reasoning makes any sense. The company can just declare that replacing or altering the firmware voids the warranty.

It's still a cost, though. People are still going to file tickets for warranty replacements, it won't be until the company receives the broken item that they'll detect the firmware replacement (if they even can, what if the hardware damage broke the ability to check?), people return items to the store they bought them from where they can't even check...

It's honestly a whole mess that will absolutely wind up costing the company money in support and returns. You can argue that it's still worth the cost, but just declaring that altering the firmware voids the warranty doesn't stop people from trying, which costs money.


Yes, it is a cost. I would argue that it's a reasonable cost of doing business, personally, but it is a cost.


>The company can just declare that replacing or altering the firmware voids the warranty

How can they know/figure it out? (They can't/or it's prohibitively expensive/difficult).

>it's very common to find my firmware reverse engineered and available within a month or so of the product being released.

Sure, but it's hidden from 99.99% of your user base


> How can they know/figure it out?

Sony's answer for Xperia phones used to be that you'd unlock the bootloader with a key you got from them by entering the serial number/IMEI. As part of that, they warn that using custom ROMs that damage the phone - by overclocking, for example - would void the warranty.

It's a reasonable compromise, in my opinion.


Do they still do this?


Lenovo does


> How can they know/figure it out?

That's not a difficult problem, technically. There are a number of ways it can be done. Anything from comparing checksums to eFuses.

> Sure, but it's hidden from 99.99% of your user base

Yes. My point is that avoiding open sourcing firmware because of the risk of piracy or reverse-engineering doesn't make sense because pirates and/or interested engineers will do it directly from the devices anyway.


>pirates and/or interested engineers will do it directly from the devices anyway

But they don't need to make it too easy by giving the source on a silver platter.


Prusa (makers of the Prusa 3d printers) set it up so that you had to physically break part of your printer before you could flash custom firmware.


You could also just blow a fuse; we do actually have ways to do write-once memory for this kind of thing


Could you describe a rough implementation, with the assumption that the vendor can still provide firmware updates?


Sure. The vendor firmware contains an option in the settings, "unlock flashing". Enabling this (after a confirmation) will blow the "user modified" fuse. When the system boots, the bootloader checks that fuse; if it's blown, then the bootloader will boot any firmware. If not, only images whose signatures verify against the vendor's public key are allowed to boot. When the vendor gets a warranty claim, they check if that fuse is blown; if the fuse is blown and the damage could be software induced, they take that into account (IANAL; they might not be able to legally refuse it outright, but they can at least ask pointed questions about what exactly happened; YMMV).

Disclaimer: I didn't come up with this, I'm just summarizing approximately how I understand some Android devices to already work. This also makes me view this system as somewhat battle tested.
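
A rough sketch of that boot-time decision in C; every hardware-facing call here (fuse_is_blown, signature_ok, and so on) is a made-up stand-in for whatever the SoC vendor actually provides:

    /* sketch of a fuse-gated secure boot flow; all externs are
       hypothetical stand-ins for SoC-specific primitives */
    #include <stdbool.h>
    #include <stddef.h>

    extern bool fuse_is_blown(void);                     /* "user modified" eFuse */
    extern bool signature_ok(const void *img, size_t n,
                             const void *vendor_pubkey); /* e.g. RSA/ECDSA verify */
    extern void boot(const void *img);
    extern void halt(void);

    void bootloader(const void *img, size_t len, const void *vendor_pubkey) {
        if (fuse_is_blown()) {
            boot(img);        /* unlocked: boot anything, warranty flagged */
        } else if (signature_ok(img, len, vendor_pubkey)) {
            boot(img);        /* locked: only vendor-signed images */
        } else {
            halt();           /* locked + unsigned image: refuse to boot */
        }
    }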


You have it in any unlockable Android phone.

In short: write the manufacturer's public key into ROM, and a fuse selects whether the signature check is enforced.


You can't read a blown fuse when the entire MCU is toast, though.


> If you are easily able to change the firmware, you are easily able to destroy the hardware,

For a recent and popular example of this, look at Hector Martin's experiments in implementing sound for Asahi Linux [1], which led to the destruction of his Macbook speakers and a warranty replacement.

(the end result is a good Linux sound driver for that hardware that correctly implements power limiting [2])

[1] Account since deleted but used to be at https://twitter.com/marcan42/status/1569595321168334848 if anyone knows an archive

[2] https://mas.to/@[email protected]/110122332896112809


>I wish more companies would provide it - akin to the old TVs and other equipment that came with [full!] schematics inside. It would let me truly understand and modify the items that I purchased.

I had forgotten about that thing with the wiring diagrams; I loved that.

That was the only thing that enabled me to repair a 1940s pinball machine that was essentially a relay-driven mechanical computer. Without those prints the machine would have surely been a loss due to the insane complexity and lack of interested/trained people that know how to work on such devices.

One wonders how many otherwise-lost pieces of equipment that kind of documentation could save from the recycling bin nowadays.


I imagine there are upstream issues too, with NDAs for hardware interfaces, and obviously proprietary OSes like VxWorks, MS ThreadX, etc.


I was about to say that a lot of this completely ignores the chip vendors that sell completely locked-down hardware/chips, leaving the network vendors that buy from them effectively acting as value-added resellers for companies like Broadcom.


> If you are easily able to change the firmware, you are easily able to destroy the hardware

How could custom software destroy a phone, a computer, a TV, a printer?

> if that's under warranty

Just void it.


A lot of consumer electronics, like phones and laptops, are powered via USB-C these days. It is capable of supplying a wide range of voltages, and the device will negotiate a voltage with the charger. This requires a PD controller on both sides - which is configured via firmware. Imagine your device was designed to charge at 9V, but custom firmware makes it request 48V instead.
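
As a sketch of how thin that protection is: the negotiated voltage is ultimately just a number the firmware hands to the PD controller. Something like the following, where pd_request is an invented stand-in for a vendor API:

    /* sketch: the negotiated voltage is just a value firmware passes to the
       PD controller; pd_request() is a hypothetical vendor call */
    #include <stdint.h>

    extern int pd_request(uint32_t millivolts, uint32_t milliamps);

    void charge_init(void) {
        /* what the designers intended: */
        pd_request(9000, 2000);      /* 9 V, 2 A */

        /* what hostile or buggy firmware could ask for instead; if the
           charger can supply it, downstream rails never rated for it
           see 48 V:
        pd_request(48000, 1000);
        */
    }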

Display panels these days use firmware to drive the panel itself. Instead of a pixel simply being on or off, it is driven with a voltage waveform which specifies how long which voltage should be applied and in what order. This allows things like "overdrive" to reach a faster response time. I would not be surprised if a corrupt waveform would cause physical damage, as the pixel would be driven way beyond its design specifications.

An inkjet printer could have the print head move all the way to one side and keep driving the motor. If the motor driver does not have proper temperature protection, it could result in the printer catching fire. A laser printer could have the firmware turn on the fuser's heating element until it catches fire.


> How could custom software destroy a phone, a computer, a TV, a printer?

The problem with your question is that you replaced the word "firmware" with the word "software" when the distinction between the two is the answer to the question.


I don't see the problem at all. Firmware is software.

I can see how defective software could damage industrial machinery but I'm having trouble imagining how some TV could possibly be damaged by software. Many consumer devices don't even have moving parts.


It is very tempting (and very easy) to think about firmware as simply software that you can update willy-nilly, and that things can always be rolled back when they go bad. But there is a reason that most embedded firmware engineers are on their org's hardware team, not the software team. If I push a change that requires a JTAG to reverse, then from the customer's point of view, the device is damaged.

The bootloader that allows you to revert back to prior firmware, is itself firmware. If that gets broken by a firmware update, then your device is effectively damaged.


> Firmware is software. [...] but I'm having trouble imagining how some TV could possibly be damaged by software.

It's not very hard to imagine. For instance, most embedded chips have several "general purpose" I/O (GPIO) pins, which can be configured as an input or as an output; their usage depends on how the chip was wired into the circuit, and very often, they're shared with "alternate functions" like a serial bus. Configure them incorrectly (as an output when they should be an input, for instance), and you can easily create a short circuit, burning that pin or even the whole chip.
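
To make it concrete, here's roughly what that stray configuration looks like from C. The register addresses and bit layout are invented for illustration; every MCU family has its own map, so see the reference manual:

    /* sketch: one stray register write turns an input pin into an output.
       Addresses and bit positions are invented for illustration. */
    #include <stdint.h>

    #define GPIOA_MODER (*(volatile uint32_t *)0x48000000u) /* hypothetical mode reg */
    #define GPIOA_ODR   (*(volatile uint32_t *)0x48000014u) /* hypothetical output reg */

    void oops(void) {
        GPIOA_MODER |= (1u << 10);   /* pin 5: input -> push-pull output */
        GPIOA_ODR   |= (1u << 5);    /* drive it high; if the external circuit
                                        holds this net low, the pin now sources
                                        short-circuit current until something
                                        burns out */
    }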


Is that particular risk common? I would generally hope that I/O pins put out few enough milliwatts that they can be shorted safely.


Yep! Most microcontrollers will sustain permanent damage if you short an output pin for any significant time. You could probably design a microcontroller that didn't do this, but it would require putting a big resistor in line with the pin (decreasing responsiveness and max power output) and/or adding a lot of circuitry and increased cost to defend against a risk that's not a big deal when the chip is being programmed by a competent embedded engineering team.


A lot of circuitry? I would have thought a handful of (integrated) transistors could make it happen.


Transistors that can handle a lot of current or voltage will need to be much larger than most of the logic transistors on a chip, so even though they may appear to be few in number on the schematic, they can add up to significant die area for a small part like a microcontroller.


They need to control the transistor that actually does the output, so they shouldn't need to be very big themselves.


This is a layer below the digital circuit abstraction in the territory of analog circuits and device physics.

The extra current is a physical thing and you need more material to temporarily withstand it, and more circuit to detect and control it. Since it's an uncontrolled switching event, it'll probably ring unless you add even more components to absorb and control that. Then that could exceed the physical limits and trigger a parasitic circuit that doesn't have an off switch, so you need yet another circuit to detect and shut it off somewhere else.

It can be a lot of work for the board designer to make it reliable and compatible, assuming the other chip it's talking to can also handle the extra current. It's cheaper and more reliable to type GPIO1DIR=OUT or whatever. Sort of like when you drive a car, it's easier to choose to drive in the correct lane than it is for the car to enforce it on you and protect you if you do it anyway.


> The extra current is a physical thing and you need more material to temporarily withstand it, and more circuit to detect and control it.

Supposedly most of the chips can already temporarily withstand extra current. But the point of a current-limiting circuit is that you don't have extra.

> uncontrolled switching event

I'm not suggesting turning it off entirely, unless that's much much easier.

> it'll probably ring unless you add even more components to absorb and control that

If it fluctuates some when overloaded, that still sounds better than frying itself. But I'd expect an integrated implementation to keep pretty tight bounds.

> that could exceed the physical limits and trigger a parasitic circuit

What physical limits? You've lost me at this point.

> It can be a lot of work for the board designer

I was suggesting building it into the chip.


The current limiter is analog and dissipates a lot of heat compared to digital and also uses more passive devices, so it must be bigger.

Digital CMOS is triggered to switch between fully on and fully off. You can't really hold it in between. If you do, you get undefined behavior.

The ringing can have an initial spike that fries stuff.

CMOS can break down and the current will flow through a different path away from the gate where the gate can't turn it off. Called latch-up.

It is in the chip. The protection circuit can add a lot of parasitic elements to the pin interface that you have to account for when you design the board.


> The current limiter is analog and dissipates a lot of heat compared to digital and also uses more passive devices, so it must be bigger.

Controlling the voltage put into the output transistor shouldn't use much power or output much heat, should it? The output transistor will heat up based on voltage loss, but it needs to be able to handle a notable amount of that even when it's not shorted.

> Digital CMOS is triggered to switch between fully on and fully off. You can't really hold it in between. If you do, you get undefined behavior.

The pins are already tri-state. The logic to output +V, or output 0V, or neither already exists. So it won't fight itself.

> The ringing can have an initial spike that fries stuff.

How can you make a transistor's output spike higher than it does with the existing digital drive method?


Turning off the transistor is only the last step and the previous steps like detection take up space. Digital CMOS is bi-state and the pin is tri-state, therefore you can conclude that there are additional components involved to achieve the third state. Spiking can be caused by suddenly shutting off current through a parasitic inductance because it sort of has inertia and can't stop immediately.


> the previous steps like detection take up space

Yes but I'm missing why they would need significant amounts of space or power compared to the big transistor that's actually dealing with the current.

> Digital CMOS is bi-state and the pin is tri-state, therefore you can conclude that there are additional components involved to achieve the third state.

Yeah, so less to add and less to worry about compensating for because it's already handled.

> Spiking can be caused by suddenly shutting off current through a parasitic inductance because it sort of has inertia and can't stop immediately.

It already abruptly turns on and off. How does an extra trigger condition make that worse?

Or in other words, how are we not already in the worst case, with nowhere to go but up? (Since if we're just controlling the transistor better we won't be adding any more inductance than the pin already has.)


It depends on the design, but think of it this way. Digital is the smallest you can go. The protection circuit is not strictly digital, therefore it is bigger.

It's not already handled because you still need a circuit that detects the condition and switches to tri-state, if that's even how it's implemented.

Ringing and spikes come from electrical mismatch. If the protection changes the electrical properties of the pin, it may have to do more work to damp out the new mismatch. "Abrupt" isn't a single thing with a universal solution.

We're not just controlling transistors, but also sensing, shunting, clamping, damping, etc. And we're starting from the best case so we have nowhere to go but down.

You'll have to look up the rest yourself.


I know it's "bigger". But the protection circuit should be working on a thousandth the power as the output transistor, and the chip has a zillion logic transistors already, so I'm saying the chip should be negligibly bigger.

It should always be tri state. Never allow the positive and negative output transistors to get power at the same time. If that particular detail wasn't already implemented, it'll take like two logic gates more. Which is absolutely nothing compared to the rest of the chip.

And again, don't change the electrical properties! Tap like a microamp for monitoring, on a pin that outputs milliamps.

It doesn't matter that there is no universal solution to "abrupt" because we already have an acceptable setup and it's not changing.

Sensing can be done with no real impact on output characteristics. Additional shunting and clamping is not necessary. If the damping only happens by controlling the output transistor, then it's no different from how the circuit already works.

And no we're not starting in the best case. We're starting with a transistor where the design goal was to have as fast a slew as feasible. If it already doesn't overshoot dangerously, then using the same or slower slew shouldn't be hard to avoid overshoot, all else equal.

Most of your objections come down to "if you change X you might cause problems" when I'm saying not to change X.


You're describing magic and contradicting yourself. Tri-state means everything is off, so one transistor can't also be on at the same time. Damping dissipates heat, so damping by controlling the output transistor requires a bigger output transistor.

Down below the digital level, you can't isolate decisions from each other and nothing is free.


> tri-state means everything is off, so one transistor can't also be on at the same time.

I'm using tri-state to mean it has three distinct states. The output transistors are not sharing a control wire to make them one-on, one-off. If that's the wrong usage, I'm sorry.

The point I'm making is that it's easy to make it so the output transistors don't fight, even if the one that's enabled gets a halfsies voltage, because the other one won't also get a halfsies voltage, it will get a pure digital off.

> Damping dissipates heat, so damping by controlling the output transistor requires a bigger output transistor. Damping the output transistor also changes the electrical characteristics of the pin.

I'm assuming it's already kind of heat resistant because it only sometimes fails when shorted with no limiter at all. And if you damp it enough you won't make much heat. But fine, let's ignore that. You already brought up just turning it off at a certain point. If that's what's needed, so be it, because that won't change the characteristics.

No magic needed to keep the characteristics the same if you're just turning it off the way it normally turns off.

But I still don't understand how a transistor with an input that damps it is supposed to cause voltage spikes in excess of the same transistor with an input that doesn't damp it and always switches with maximum aggression, with only the control logic changing.


At this level, we're creating the concept of a digital state, so we can't use that result to solve problems, it's circular reasoning.

If damping makes heat, how can damping it enough make less heat? Why are you assuming that it's heat resistant if this is the level where you design the amount of heat resistance it has? Who did that work? Nobody, you have to do it yourself.

There is no control logic. It's analog. Logic is digital. Digital doesn't exist until we're done.

Sorry, but I've done all I can. Good luck in your search.


This is naive. There are tons of firmware changes that will render hardware unusable until the board is re-flashed (which you may not be able to do at that point), and even without moving parts there is the capability to damage components. Simple example: often firmware is the only thing that stops your hardware from cooking itself, if it is capable of doing so. Or if you mess with battery management you could conceivably get a fire. Or you mess with power distribution and send the wrong voltages, and parts are fried. Or, etc. etc.


You're really not trying hard to think of any possible answers, are you?

In most devices over a certain level of complexity, there's firmware involved in thermal management. Messing that up can easily lead to premature failure of the hardware, and in some cases very quick failure.

Even for a TV which may not have much of a thermal concern for its processor (though I wouldn't bet on it with today's smart TVs), you can still expect there to be some screen burn-in mitigation.


> You're really not trying hard to think of any possible answers, are you?

I asked a question because I don't know. I assumed you did. I'm trying to learn something.


You're being passive-aggressively unimaginative and unwilling to seek out information for yourself, in a manner that is a common form of trolling. Even if trolling is not your intent, you should know that the idea of learning by asserting something wrong online and waiting to be corrected may make for a good joke but is not a polite and productive way to have online conversations.


I'm not being "passive-aggressively" anything. I asked a simple question. You're the one who started some firmware/software semantics argument instead of actually answering what I asked. I didn't assert anything either, other than that firmware is software. I continue to assert that now.

If you think I'm trolling, then simply don't reply. I've learned plenty from the other posts. I'd rather not create hardship for dang by continuing this thread.


Here's one moving part: speakers. In the chase for good audio performance from small speakers, it's becoming more common to have a driver which can easily burn out the speakers when given the wrong audio signal at normal volume. The firmware needs to actively model the power dissipation and temperature in the speaker, and limit the signal if it could damage the speaker.
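
A stripped-down sketch of that kind of limiter in C, using a first-order thermal model. All the constants are placeholders; the real ones come from characterizing the specific driver:

    /* sketch: crude speaker protection via a first-order thermal model.
       Every constant is a placeholder; real drivers ship measured values. */
    #include <math.h>

    #define AMBIENT_C 25.0f
    #define T_MAX_C   80.0f   /* hypothetical voice-coil temperature limit */
    #define R_TH      40.0f   /* degC per watt of coil heating, made up */
    #define ALPHA     0.001f  /* per-sample smoothing, sets the time constant */

    static float coil_temp = AMBIENT_C;

    /* called per sample with the estimated electrical power into the coil;
       returns a gain in [0,1] to scale the output signal by */
    float speaker_protect(float power_watts) {
        /* move the temperature estimate toward the steady state this
           power level would eventually produce */
        float target = AMBIENT_C + power_watts * R_TH;
        coil_temp += ALPHA * (target - coil_temp);

        if (coil_temp < T_MAX_C)
            return 1.0f;                                          /* cool: pass through */
        return fmaxf(0.0f, 1.0f - 0.05f * (coil_temp - T_MAX_C)); /* back off */
    }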


Just tell the voltage regulator to output 12v instead of 3.3.


> I'm having trouble imagining how some TV could possibly be damaged by software

Modern OLEDs perform image manipulation to avoid burn-in. Your XDA Developers firmware is going to end up with your screen being garbage in a year.

Android firmware routinely allows processor overclocking, a great way to damage components in devices that have tight thermal requirements.

Those are two. Arguing that firmware can't damage the hardware that it controls shows a decided lack of imagination.


One example is flash memory, which can only do a certain number of writes (on the order of 10,000-100,000 writes).

You could have a firmware that saves everything to flash all the time and eventually causes the flash to become defective. It'd be easy to burn a flash out in less than a day.
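
To put numbers on it: a naive autosave loop like the sketch below hits the same page ten times a second, so a 100,000-cycle part is dead in 100,000 / 10 = 10,000 seconds, i.e. under three hours. The flash_write_page and sleep_ms calls are hypothetical HAL stand-ins:

    /* sketch: a settings-autosave loop that wears out a flash page.
       flash_write_page() and sleep_ms() are hypothetical HAL calls. */
    #include <stdint.h>

    extern void flash_write_page(uint32_t page, const void *buf, uint32_t len);
    extern void sleep_ms(uint32_t ms);

    void autosave_task(const uint32_t *settings) {
        for (;;) {
            /* always the same page, no wear leveling: worst case */
            flash_write_page(/*page=*/7, settings, 64);
            sleep_ms(100);   /* 10 writes per second */
        }
    }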


There was an early 8-bit home computer where you could reprogram the video controller chip to drive the TV/display out of its operating range (timings, video frequencies, etc.), which, if left that way too long (how long? I don't know; I'm a software engineer, not hardware or electrical), could damage the TV/display.

Go back further, and you get to harddrive races, back when harddrives were washing machine sized devices, and if you get the heads going back and forth fast enough, they start shaking and could "walk" across the room. That doesn't sound like it's good for the harddrives.


You could definitely brick it, in addition to all the stuff everyone else mentioned.


The firmware is what stops you frying components. A good example is that MacOS has software which prevents the speakers from being over driven resulting in physical damage. There are other problematic things like overwriting flash memory which might result in the device not functioning properly.

Most of the time it would be extremely difficult to work out if the user caused the damage or if there was a bug or manufacturing fault. So they just give you a replacement anyway. It would be possible to create extra checks and void flags to try to detect this but that's just extra cost to the product for a feature almost no one cares about.


Another example: Your CPU has software that throttles its performance as it heats up, and eventually shuts it down. Remove that safety with custom firmware and you can make it overheat. Same thing goes for hard drives, SSDs, network interface cards, and so on.


I'm not sure vendors can void warranties in all countries. If you make a product that advertises support for custom firmware, then "your hardware should be capable of handling that" is likely how it would be viewed in my country.


Just one example: the firmware for some piece of hardware could control the DC biasing of the device. Set the bias incorrectly and now you're drawing more current than is safe for the hardware and drastically reduce the lifetime.


What is EE?


Electrical Engineer, most likely.


Got it, thanks.


> But then, firmware is weirder than we give it credit for. It's even hard to say exactly what it is. That used to be easy – firmware was software built into hardware (don't mention microcode.)

Maybe this is just my bias as a low-level MCU programmer, but I wonder if this isn't a definition problem more than anything else. To my mind, if your code has problems like this:

> We notice the old devices piling up in a desk drawer, hardware perfectly fine but with ancient firmware that just won't play with modern services.

Then it's not firmware, it's full-fledged software, and ought to be treated as such. Calling a smartphone OS "firmware" is particularly odd to me -- it runs on a general-purpose computer! It takes up gigabytes of storage space! It updates itself over an internet connection that it also manages! -- and I think it gives the wrong idea about the system it's installed on and the nature of the software itself. In particular, anything that needs regular updates is not "firm" in any sense.

It is hard to draw a firm line somewhere between the code in a tiny microcontroller running a battery charger and the operating system running on a general-purpose application processor. Motherboard BIOSes are a bit of a marginal case. But I think there is a useful distinction between "acts like part of the hardware and never needs to be changed" vs. "is the core of the product and when it stops being updated the product quickly becomes useless". Very few people are clamoring to hack on the firmware for their PC's power supply or their car's air conditioner.


The difference between "software" and "firmware" is mostly relative to your view point. It's another level below the stuff you work on yourself.

For example many modem modules these days have fairly powerful ARM SoCs running Linux and they also have another smaller processor running a RTOS or bare metal that does the radio stack. The whole module could be integrated into a larger device itself running Linux too (such as a payment terminal).

The terminal owner will probably consider the entire terminal software as "firmware" (since it's in a device and pretty opaque to them and they likely can't update just some applications like on their PC).

The developers working on the code in the terminal will probably call what they do software and consider the stuff that goes in the modem module to be "firmware" (and probably not differentiate between the Linux-based and RTOS-based parts of the modem).

The developers working on the Linux part of the modem will probably consider what they do to be software and the firmware to be the RTOS-based stuff that runs on the baseband processor.


All firmware is software but not all software is firmware.


I don't know if you're right or wrong, or if there even is such a dichotomy, but I struggle with the same thing with one of our products. It's basically Linux running on an ARM chip, with our stuff on top. We call it "firmware" for what are probably reasons of habit, because all of our other products actually do use what I would call firmware. And when we hire for the team, we specify "embedded programmers". But I dunno, this board (single-core though it might be) would have suited me just fine as a desktop box twenty years ago. It runs a full OS, it runs a web server for the UI, services in the background; it's a reachable server in a teeny-tiny box if you ask me. Writing code for it is much like writing any other Linux app. Is it really "firmware"? Do we really need "embedded programmers"?

I would argue "no" on both points.


I think any firmware that's complicated enough that the manufacturer gets it wrong should be open-source. Especially if the manufacturer is deliberately crippling the product with bad firmware rather than just accidentally through incompetence. And firmware that's too simple to screw over the user is probably too simple to be eligible for copyright protection to begin with.

A motherboard BIOS is not at all a marginal case. BIOS bugs abound, as can be seen by booting Linux on almost any PC and looking through the kernel logs for ACPI table errors and various other workarounds and quirks being activated.


I think you're talking past the parent's point here.

Firmware as I understand it (also as a microcontroller firmware programmer) is about the same as what the parent understands.

I think your metric about "too simple to screw over the user" is sort of weird, when in the context of my work and the parent's work, "screw over the user" might well mean "disabled the oxygen pump on the user's space suit".

It is in that sense that the motherboard BIOS is a marginal case. It is marginal in the sense that BIOSes are clearly at the margin between embedded microcontroller firmware and a full-blown general-purpose-computer operating system. If I have to update the BIOS on my computer every month (or every week) just for it to boot into any operating system, then something is incredibly wrong, even as modern BIOSes have become several orders of magnitude more complex today.


Firmware problems don't have to be severe enough to put human lives at risk to be a real problem. There are tons of examples of bad firmware leading to broken power or thermal management leading to crippled performance or battery life or excessive fan noise. WiFi NICs have subsumed large parts of the network driver stack and in doing so have made it impossible to implement effective QoS, leaving users stuck with stupid radio behaviors that hurt the performance of every device operating on the same channel.

None of what I'm proposing would lead to a bios update every month, except in the initial period of fixing the worst of the manufacturer's mistakes. "Firmware" as I'm using it would still be trying to present a stable interface to the rest of the system and not inherently be a moving target of constant feature creep in addition to the bug fixes.


By "marginal", I was thinking more of whether a BIOS is possible to ignore. The vast majority of PC owners never update their BIOS at all. If you buy a motherboard a couple years after its release, there may not be any significant updates from the manufacturer. It's not like the OS where you're downloading updates multiple times a month.

This is separate from the question of whether the BIOS should be open source. I'm inclined to think that it should, especially after the motherboard has been out for a few years.


> The vast majority of PC owners never update their BIOS at all.

I wonder if this is true now that Windows will do it automatically. I have been more than once surprised to see a computer updating its firmware after a Windows reboot.


Pretty much all laptops are getting firmware updates delivered through Windows Update (or a vendor-specific automatic tool), and usually for several components rather than just the motherboard UEFI firmware. And they definitely need those fixes, because the annual cadence of product refreshes means they're shipping from the factory half-baked.


The whole "never needs to be changed" thing isn't really useful, though.

I have updated the firmware on my mouse, webcam, docking station, SSD - even a light bulb! We are rapidly reaching a point where the firmware on all but the most trivial devices can be updated, and often has to be in order to fix bugs.


Personally, I would call most of that a horrifying failure of engineering, but I’m a bit of a curmudgeon about “smart” devices.


In a past life I was an embedded designer, and I think your observation is spot on. While the libre enthusiast in me would love to be able to inspect and possibly modify bona fide firmware, I do respect that say tweaking the code on a power supply or motor controller is fraught with hazards (you want to use a debugger? while the microcontroller is stopped, your circuit cooks. you want to use printf to a serial port? the additional cycles make your circuit cook ...)

But there are tons of companies abusing the term for software that is actually being continually updated, and/or isn't hardware-critical at all (eg Intel ME). The use of "firm" here is entirely prescriptive, serving the manufacturer's business desire to lock out end-users, rather than reflecting code that is akin to fixed hardware.

A simple test might actually just be the desire to modify/repair. Nobody really wants to dig into a power supply controller unless it's doing something very broken that needs to be fixed. Meanwhile, what creates the most e-waste is the ending of software updates, which are most certainly not firmware.


I think we should drop the word firmware. Firmware is just software.

And to be honest, the distinction has never been so much a technical one as an ownership one. Firmware is really software that you take for granted, and "software" is the software on top that you can maybe configure yourself, but that line is so blurry.

Firmware vs software has more meaning when applying for jobs but it kind of comes down to an indication of culture more than technology.


I disagree. The distinction is very valuable when it comes to how you design the system for maintenance.

Firmware is not simply software that you take for granted; it's software that is designed and tested prior to release such that updates and maintenance are very rare. It's possible that updating by the user is impossible. Example: I worked on a swarm of environmental sensors that reported via bluetooth. In principle, you could send out a new firmware package over the air, but rewriting their ROM required more current than the coin cell could provide. These devices were meant to be in hard-to-reach places, so updating the firmware on a fleet of them (a few thousand in an average warehouse) was a pretty onerous task, involving recovery, disassembly, and reprogramming using specialized equipment. When we pushed out generation 2 of these devices, we addressed the issue by provisioning for more current, but it was still understood that there was some non-zero possibility that an update could brick the device, necessitating the onerous recovery process. The takeaway: if it really hurts to update, it's firmware.

I do get irritated that I have to read quite a ways down a job description before they indicate something like "embedded Linux/Windows/AmigaOS/etc", because when I call myself a firmware developer, I think embedded C on a microcontroller with at most a simple RTOS.


That is a good point that I didn't touch on in my comment. Although, oddly, a lot more software used to work that way before the internet. Console video games in particular went from "there might be different versions of the cartridge ROM or disc but you won't ever know" to "download the day one patch if you want the game to work and download the next two patches if you want it to work well" practically overnight once they got internet connectivity.

Maybe the real distinction today should be between internet-connected systems and non-internet-connected systems.


> In particular, anything that needs regular updates is not "firm" in any sense.

It's really not the updates that make something "firm". Hardware is hard because it's a real physical thing. Software is soft because it's a non-physical thing, a set of instructions. Firmware is firm because it's less physical than hardware, and more physical than a bare set of instructions. Firmware is software, in that it's a set of instructions, but additionally it needs to be loaded or flashed or programmed into the hardware, and stored either on-chip or in some ROM or NVRAM nearby, differentiating firmware storage from software storage on disk, tape, or some other peripheral storage. At the time, this made pretty clear sense, but over time, things were made murky by multifaceted uses of NVRAM and peripheral storage.

So, "software built into hardware" is a pretty good definition. When you say something isn't firmware, it's software, that seems mistaken. All firmware is software, not all software is firmware. The size doesn't matter. Whether it's an application or a device driver doesn't matter. What defines the "firm" part of firmware is whether it's "built into the hardware". That's it.

So, yeah, you're having a definition problem. If you keep whatever definition that you currently have, then you're gonna have a bad time. If you try to "draw a firm line somewhere between the code in a tiny microcontroller running a battery charger and the operating system running on a general-purpose application processor", with this new definition, the question becomes "where is this code stored?". It doesn't matter how large it is, whether it's 100 lines of code in your battery charger or 1,000,000 LOC for your OS. If it's in on-board storage, it's firmware. If it's in peripheral storage, it's software.


How would you classify the operating system on a Macbook? The entire SSD is just a bunch of flash chips soldered to the motherboard - just like the BIOS flash chip.


I think the distinction is that firmware isn't updated as much as software, and will live on a read-only file system.


Good article, but some points I disagree with.

> but if you go anywhere except to the manufacturer when you update a motherboard, you deserve to be busted down to abacus operator.

Well, good luck finding drivers and firmware. Realtek used to be real bad, and Compal used to pull their archives for "discontinued models" the day the devices went EOL. Microsoft thankfully forced OEMs selling Windows PCs/laptops to provide Windows Update integration for drivers because the situation got out of hand, but for accessories it's still the Wild West, and it's very difficult to find archives for stuff that's been discontinued or where the vendor went through half a dozen mergers.

Not to mention SEO scammers hijacking "<manufacturer> drivers" search terms to a degree that they get the first Google listing and then use dark patterns to get people to download malware. We can't expect users to be able to detect SEO spam, not in times where criminals clone entire newspaper or bank sites to pull off extremely convincing scams.

> Companies like using firmware to lock down their devices to business models – even when, as Sonos discovered, those models can provoke customer rebellion.

For some things, particularly anything involving RF communications, there are legal requirements to not let people access chips in a way that allows them to manipulate the signal, e.g. by using them as SDRs, using frequency bands not allowed, or using too much power for the amplifiers.

In other cases, they're forced to do so because they wouldn't get content... Netflix and other streaming apps are really bad here, it's a constant hassle to get it running on rooted Android devices.

And apps like in banking enforce un-modified firmware to limit legal exposure when people get hacked. It doesn't make sense because the risk model is just the same as online banking on a PC, but here we are... (conveniently ignoring the bullshit South Korean banks pull based on long-outdated laws).


An oldie, but a goodie:

> Hardware met Software on the road to Changtse. Software said: ``You are Yin and I am Yang. If we travel together we will become famous and earn vast sums of money.'' And so they set forth together, thinking to conquer the world.

> Presently they met Firmware, who was dressed in tattered rags and hobbled along propped on a thorny stick. Firmware said to them: ``The Tao lies beyond Yin and Yang. It is silent and still as a pool of water. It does not seek fame, therefore nobody knows its presence. It does not seek fortune, for it is complete within itself. It exists beyond space and time.''

> Software and Hardware, ashamed, returned to their homes.

...from The Tao of Programming [0]

[0] http://canonical.org/~kragen/tao-of-programming.html


I understand most of the concerns hardware manufacturers have about publishing their firmware source code. However, if they did so after a product is declared obsolete, or at least unlocked bootloaders and published enough documentation that new firmware could be developed, the community would have a way to recycle old hardware without manufacturers surrendering most of their precious IP.


Yes, unlocking bootloaders the moment you stop shipping updates would be an absolute no-brainer to require, imo. I have so many older Android phones floating around that would still make for great backup phones, or for dedicated usage for stuff you don't want on your main phone, but they're stuck with horribly out-of-date Android versions.


I agree with the principle, but I think your notion of "obsolete" doesn't really match the incremental nature of hardware/software development. When does an iphone go "obsolete"? Apple re-uses pieces, giving them minor tweaks along the way, and stops making and even supporting the old ones after a while. But the code in a phone I would call obsolete will still live on in a current phone. Companies worried about their precious IP will still have a valid excuse to not release the source.


The suggestion that an individual or a community could actually build, let alone maintain, any of these OS distributions for older hardware is disingenuous.

There is literally no chance whatsoever that, even with complete access to all of the sources, any individual or community organisation could build an iOS distribution; not even once, let alone on any sort of cadence that would make the effort worthwhile. The required build infrastructure is massive, and maintained by a dedicated priesthood whose existence and experience are an integral part of the machinery.

Let's not even get into testing, or carrier qualification, or support, or ...

This is well known, and not a subject for meaningful debate. Any writer claiming that "just releasing the source" would result in any of the claimed benefits is lying in service of some other objective (or delusional, but probably just lying).


I can't help but disagree. Just look at Linux distributions for an example proving you wrong, let alone things like TWRP, a recovery image that with relatively little work builds and works on most any phone that will take it, or any of the big Android distributions that deal with hundreds of phone models. A custom build of iOS would be nothing by comparison.


You are displaying your ignorance. I ran an AOSP build customized by volunteer enthusiasts on a smartphone that originally shipped with Windows Mobile on it. With zero issues. For _years_. The HTC Vogue lasted me from Windows Mobile through Android Cupcake, Donut, and Eclair, working perfectly all the while. It's completely possible if the manufacturers aren't sheer jerks. That enthusiast work combined with open source code kept me from creating at least 3-4 smartphones' worth of e-waste by extending the life of my device, and it functioned better for most of that time than it originally had out of the box.

There is also decent enthusiast/volunteer support for things like routers and other gadgets out there.

The only reasons this isn't common are paranoia and desire for profit over all other customer and environmental welfare concerns.


> The suggestion that an individual or a community could actually build, let alone maintain, any of these OS distributions for older hardware is disingenuous.

Well that's just objectively false, unless I'm misunderstanding you. I'm writing this message on a phone running an open source Android build, which I reasonably and with precedent expect to keep getting updates long after the manufacturer has abandoned it. The thing you say is impossible is done every day.


While I think that a lot of the "right to repair" and "right to tinker" folks dramatically over-estimate the value provided - just look at the complaints about self-repair of iPhones now that it's possible! - projects like the CHDK firmware for Canon cameras show that motivated communities can certainly do amazing things with devices.


Microprocessors in embedded devices are getting pretty capable. I've been asked to port FreeRTOS and Linux in about equal measure.

What gets me is, if the schematic designer would annotate the schematic with addresses, busses, and chip pin uses, then I could write a script to port a new design to a particular OS automatically. My job would be obsolete.


Just curious: porting Linux or another OS to a different chip sounds like a daunting job. How do you approach the task? And I guess it's mostly about the kernel? Thanks!


A different chip is a big job.

A different development board with a familiar architecture (e.g. an ARM Cortex) is a matter of loading drivers for all the parts connected on the board. For that you create a 'device tree', a text format resembling a JavaScript or JSON data definition, with many particular conventions for pins, addresses, and buses. See any document on device tree formats, e.g. from Freescale.
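To make that concrete, here's a minimal sketch of a device tree fragment. The bus controller node, its compatible string, and the addresses are invented for illustration, though "ti,tmp102" is a real compatible string for a common I2C temperature sensor:

    i2c1: i2c@40012000 {
        compatible = "vendor,soc-i2c";  /* hypothetical bus controller */
        reg = <0x40012000 0x400>;       /* register block: address, size */
        #address-cells = <1>;
        #size-cells = <0>;

        sensor@48 {
            compatible = "ti,tmp102";   /* binds to the kernel's tmp102 driver */
            reg = <0x48>;               /* I2C slave address */
        };
    };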

It gets compiled into a .dtb blob ('device tree binary'), which is flashed onto the board next to the OS. The bootloader, e.g. a U-Boot image, expects it in a certain place, loads it into memory, and provides it to the booting kernel.
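If you want to try that step yourself, the standalone device tree compiler does the conversion (the file names here are placeholders):

    dtc -I dts -O dtb -o myboard.dtb myboard.dts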

Of course you need the device tree to use most devices. That's why there's an extra load step: the bootloader is a flash image with a tiny subset of device information hardcoded, just enough for the initial load.
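For a sense of how the kernel consumes this: a Linux driver declares which compatible strings it handles in a match table, and the kernel binds it to any node that lists one of them. A minimal sketch, continuing the tmp102 example above (the table name is made up):

    #include <linux/module.h>
    #include <linux/of.h>

    /* nodes whose compatible string appears here bind to this driver */
    static const struct of_device_id my_sensor_of_match[] = {
        { .compatible = "ti,tmp102" },
        { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, my_sensor_of_match);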

That's all I can say without launching into chapters of details!


Thanks, interesting process. I guess new chips, especially ones with other architectures, might require a lot of rewriting of the underlying code.


Device trees seem like such a step backwards compared to x86_64 boxes, where discovering all your hardware has long since been standardized.


PCs are far more functionally standardised than embedded systems. And modern PCs do in fact use something similar to, but far more complicated than, DTs: ACPI tables, which contain not only static data but also callbacks that let system firmware do some of the work. Even on a PC there are still many things that are not on discoverable buses but on I2C or SPI.

The only reason it seems simpler on PCs is that the ACPI tables are supplied by the hardware vendor, so they're mostly invisible to the end user. But that only works because PCs are mass produced and the variability between different manufacturers is fairly limited. Microsoft and Intel specified what the system firmware must do and enforced it through certifications. Most PC firmware is also only tested for the case of booting Windows, and Linux often has to emulate its bugs...

The same wouldn't work for embedded systems, which need far more flexibility.

Yes, some ARM systems these days use UEFI and ACPI, but that is for server hardware, which is basically a PC with the x86_64 processor replaced by an ARM one, where it's desired that everything else work the same.


PCs have a device tree too; it's just called ACPI. Everything post-PnP relies on static descriptors to discover hardware.


Yeah, but those descriptors are baked into the hardware, so you don't have to hard-code them into the operating system.


Those tables are often wrong. That causes all sorts of issues, and it's a big reason why Linux tends to have sleep problems and why Windows Modern Standby exists. When that happens you're either maintaining workarounds on the kernel side or patching the ACPI table yourself (terrifying; see this example [1]). The hope is that the end user will eventually apply firmware updates that might not exist for months or years. With device trees, the kernel applies a patch and it rolls out with the next update.

[1] https://www.reddit.com/r/XMG_gg/comments/ia9x6c/fusion15_lin...


The main thing is there's relatively little incentive to standardise: the kind of hardware you write device trees for almost always isn't the kind of hardware you sell to customers to load whatever OS they fancy on; it's generally a special-purpose device intended to run one software stack. The fact that there's even a standard like devicetree is basically just the Linux maintainers trying to avoid hardware companies writing a million custom initialisation routines. (AFAIK there is also some standardisation around providing a devicetree to the OS from the bootloader for ARM servers, where there is such an expectation from customers, though ACPI is also often used.)


Yeah, I guess it's to be distributed? There's no central driver repo to refer to.


I have a calculator that uses a SuperH CPU, which is already supported by Linux, but drivers for things like the display won't be. It only has 8 MB of RAM and 16 MB of storage, which could be restrictive. Would porting Linux to it for fun/learning be a viable thing to do for someone who knows some C and has done some low-level programming, but hasn't worked on Linux before?


Not OP and haven't ported Linux in a while, but:

Obviously some details are going to depend on the device, but it’s essentially:

- set up a (cross) compiler

- configure & compile the kernel

- write the device tree (so the kernel knows what/where everything is)

- compile the boot loader (with the device tree)

- cross compile the minimum userspace for the system to function

- bundle everything so you can install/flash it.

Yocto [0] is what most people use to manage all those steps atm.
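Done by hand, the kernel-related steps look roughly like this, assuming an ARM board and a distro-packaged cross toolchain (the defconfig name is a placeholder):

    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- myboard_defconfig
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage dtbs modules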

> And I guess it's mostly about the kernel?

Afaik it’s mostly writing the device tree (though it’s often provided in a board support package) and setting up the toolchain.

[0]: https://www.yoctoproject.org/


Thanks! These are all strangers to me, I think, except the cross-compiling part. But it's good to have some pointers to go on from here.


Stupid question: which jobs involve porting Linux (or other OSes, or other systems software) for a majority of their time?


Embedded systems or firmware dev


Thanks! Gonna take a look at job descriptions and skill requirements.


Not to detract from GP's experience, but here's a good account of someone porting Linux to an older device (the Chumby): https://hackaday.com/2022/12/21/chumby-gets-new-kernel-soon/


Thanks, this looks like an interesting project. I'll probably start small and get something onto my Pico (without any modification, as I know some people have already done this) and dev from there.


There is precious little open firmware out there as it is, and even the stuff that is open has various problems: builds require proprietary tools, the hardware requires vendor signatures, etc.

https://wiki.debian.org/Firmware/Open


This is going to be the OpenWrt argument all over again. Linksys mid-tier and low-end hardware was somewhat differentiated by what the plastic looked like and which firmware version you had on the box. Most boxes got a lot of new functionality once you installed the open firmware. And once Cisco bought Linksys, that whole problem just got worse.

So they won't want to do it.

Anyone doing Consumer Protection or Right to Repair should - at a minimum - request that all of the firmware for devices be transferred into escrow, to be released upon end of life for the product. Ideally that would be released when the product ships, but I don't think you'll be able to get that out of the gate.

What I'd really like is for all appliances to utilize a standard microcontroller design and pinout, similar to arduino or raspberry pi, and day-of-release firmware availability.


Most of these devices are running Linux, so I would say the fact that manufacturers are not shipping "the scripts used to control compilation and installation of the executable" at time of sale, or promptly upon request, is already illegal.

The GPLv2 actually foresaw developers failing to ship critical bits that are required to run the software. There's just little will to enforce it.


I was researching open source hardware and was surprised to find very few offerings: SiFive (RISC-V) and Raptor Computing Systems (POWER9). Are there any others?


MNT Reform looks like an interesting open hardware solution.


It looks like the boot ROM source code is not available [1], and it looks for blobs during initialization [2]. This thread makes it sound like ARM is a complete write-off in this department [3].

[1] https://community.nxp.com/t5/LPC-Microcontrollers/Where-may-...

[2] https://forums.puri.sm/t/the-i-mx8-cannot-be-deblobbed-nxp-s...

[3] https://www.reddit.com/r/ECE/comments/9oarto/arm_socs_with_o...


I've worked on firmware for mass-market consumer products, and one point the article doesn't raise is that firmware often supports company intellectual property such as specialized hardware, ASICs, and other custom circuitry. I'm all for separating out the upper levels and allowing users or enthusiasts to add their own code. However, the explosion of cool gadgets in our pockets didn't happen because of tinkerers; it happened because massive piles of money could be made. Let's not disincentivize hardware development that requires billions of dollars in up-front investment.


"Now many devices have enough system flash on board to hold the complete stack, firmware now includes complete operating systems and has come to mean that software at the heart of your technology that controls its behavior and which you can't just load in as an app.

So why lock it down?"

Unclear with respect to intent. However, a practical effect is IMHO to deny the computer owner control over their own computer. Whatever control the owner had through the user-facing operating system is now superseded by a non-user-facing operating system. Whoever controls that operating system controls the computer.


First of all, firmware bugs can physically damage some hardware, and some might even make it dangerous to use (e.g. overheat and catch fire).

It just increases support costs and liability for the manufacturer for little or no gain.

On the security front, having the source code leaked makes it much easier to develop APTs and deep implants.

So it's easy to see why it's not something most manufacturers embrace willingly.

The nerd in me really would love to have access to more open firmware, but I get where the apprehension comes from.


> First of all, firmware bugs can physically damage some hardware, and some might even make it dangerous to use (e.g. overheat and catch fire).

Doing random crap to my car's engine when I have no clue what I'm doing poses all of the same risks, if not more, and I can still do that. That doesn't mean it's covered by the manufacturer's warranty.

> It just increases support costs and liability for the manufacturer for little or no gain.

You got a device sent in from a customer that bootloops because they flashed some garbage? They can get it back for a service fee of $20, or you'll dispose of it for them.

The case deforms where the camera is mounted, and dust and water can get in (hello, Pixel 4)? Replace it, since this has nothing to do with firmware; it's a plain old manufacturing defect.

> On the security front, having the source code leaked makes it much easier to develop APTs and deep implants.

See article. Normal users get firmware through official channels anyway, and targeted (supply chain) attacks are run by people who either already have access to the source code or have the resources to do the reverse engineering, so it doesn't really matter whether it gets easier.


There's actually an industry based on modding car software. I have a friend who does exactly that, as a hobby.

They mess with the firmware (usually by changing coefficients and thresholds) to make the car more performant (probably falling foul of various environmental and safety laws in the process).

And, then, there's VW's clever tweaking of their (completely legal) firmware...


And, especially in European cars, car software modification has been aggressively locked down, mostly for emissions compliance, anti-reversing, and warranty reasons. Just like any other device firmware.

The "I can modify my own car, so why can't I modify my own device firmware" analogy for firmware really falls over here, honestly, because cars are firmware these days and it's locked down and protected the same way as any other firmware.



