We achieved a 6-fold increase in Podman startup speed (redhat.com)
403 points by DrinkWater on April 13, 2023 | 189 comments



I'm all for improvements to pod startup times etc, but the general idea of putting more software into cars is not that appealing. I recently broke down in the highlands of Scotland in a fairly new car with the family - it was a horrible experience. It was made worse by the fact that there was nobody close that had a clue what to do with the car. The breakdown service arrived promptly, plugged the diagnostic tool into the car, proclaimed it broken, called a tow truck and left - two days later we arrived home. Had I been in a less complex car, a local garage could most likely have fixed the problem and sent us on our way. The sophistication and gadgets in modern cars are great until something goes wrong then they fail hard. Small local garages that used to be a life saver are next to useless now as they don't have the tools and knowledge to fix a mobile data centre.


More software is fine IMO. More software on critical path ain't.

Have the ECU only do the engine thing. Have the AC control just do AC control. Decouple dependencies and make it as simple as possible. Old cars already do it: the blinker switch sends a signal directly to the light controller, not to some central box that decides what to do with it.

If something needs config in addition to control signals, have it keep its own config and only be updated from the "config manager" (infotainment box). If the infotainment box dies, everything else still works.

Cars already are basically "microservices on a message bus". Let's just use what works with that - minimal coupling and maximum independence of "services".

> Had I been in a less complex car, a local garage could most likely have fixed the problem and sent us on our way. The sophistication and gadgets in modern cars are great until something goes wrong then they fail hard. Small local garages that used to be a life saver are next to useless now as they don't have the tools and knowledge to fix a mobile data centre.

Out of curiosity, what was the issue?


According to the car, the stability control system wasn't working. Cause? Obviously a broken fuel injector. The stability control system talks to the engine ECU to control the torque if there is a lack of traction - it is notified of this by the ABS computer. Broken injector = no ability to manage torque, hence the traction control warning. Sitting here now, that makes perfect sense. In horizontal Scottish rain - less so!


Unless I'm missing something, it seems that this information largely invalidates the thesis of your previous post. A critical component (the fuel injector) failed, and the software in the car prevented it from running and causing catastrophic damage. Roadside assistance came, immediately determined it couldn't be fixed on the side of the road and towed the car. Seems like a best-case scenario given the circumstances, other than possibly the red herring related to the stability control.


Assuming that the fuel injector wasn't stuck open, the car could simply disable the affected cylinder and continue to run (poorly) in limp mode. I had exactly this happen in a 20 year old VW and it turned out that the injector was fine and the connector had just come loose. The engine sounded awful running on 3 cylinders and wouldn't go past 3000 rpm but there was no permanent damage. The fault code in the ECU correctly identified the problem (fuel injector cylinder X open circuit) though it did also log misfires and disable traction control.


Oh yeah, fewer computers in the car wouldn't fix it.

Sure, an old carbie with a distributor might've just run on 3 cylinders, but that also might damage something.

Also, automakers don't really want to give users sensible error messages or even just metrics, because without experience they might misinterpret them as a different problem.

For example, if a car has an oil pressure gauge, it is either nearly fake or heavily filtered. Oil pressure changes according to load, but a gauge going up and down might cause the user to think something is wrong with the car...


> Sure, an old carbie with a distributor might've just run on 3 cylinders, but that also might damage something

A car is not an iPhone - if the car can move at all, it must move.

The alternative could be freezing to death. What if I am driving in rural Siberia, or Canada, and there is no phone signal to call for help?


Not all cars are made equal. Just as you don't go to a cross-Sahara race in a car you don't know, you don't buy a Prius to go logging in Alberta or Yakutsk.

Sure, that doesn't necessarily invalidate your argument; after all, this increase in car complexity (through the "electronization" and smartification of more and more components) without a corresponding increase in debuggability/repairability is IMHO a bad trade-off for many consumers.

Case in point: our second-hand 2011 Ford Focus has a problem with the electronic steering assist. Apparently it somehow experiences some kind of over-voltage and the internal system shuts down. It's likely due to humidity. (So probably it's simply a design/manufacturing/QA issue.) Okay, but there's no way to get the actual data out of the steering system's integrated electronics, though it is possible to reflash a different firmware onto it. Which resets the internal data. Which basically clears this error state, and the car will happily use it.

But there's clearly a mechanical error; there's a new "bad" noise when turning the steering wheel. But it's a 10+ year old car, rarely used, and replacing the steering system is about ~1000 EUR, while doing the firmware flashing was ~30 EUR. (Finding the guy with the laptop who can flash the firmware through the good old OBD port was the challenge.)

And it's basically a big (market) information asymmetry problem. The car industry wants to sell more cars. Sure, they sell some parts, but the more repairable a car is, the fewer parts it really needs, as consumers can make their own tradeoffs.


My car has a button for traction control. If it’s not working I would expect it to turn itself off and ding, not just halt the vehicle.


A bad fuel injector should usually be a reason to stop driving. Depending on the failure mode, it would likely damage the piston and/or cylinder fairly quickly if you attempted to keep running it.


That would obviously depend on the failure mode. But there certainly are failure modes which could be quite damaging, and an ECU may have limited ability to determine which failure mode is occurring. Even if it has sensors that can indicate certain failure modes, it is not always clear whether those can be trusted, as there may be additional failure modes that make the sensors give misleading results.

So shutting it down certainly seems sensible.


There are situations where you must run the car, even at the risk of damage, because waiting for help is dangerous to the occupants.


Cars have a “limp-home” mode which they enter if sensors show odd yet non-critical errors. Usually it restricts the acceleration and top speed to 30 km/h or so. If it totally shut down, it was likely a very serious error.


But what cash grabbing opportunity would the dealer have in that case?


That sounds horribly familiar! (Old, relatively non-fancy Ford Focus; it limped along with the failure.) The explanation makes sense now, though it didn't at the time, and the traction control button didn't help. The specialist garage initially said "sensor failure", as I had assumed, having lost my OBD device.


You know what this car needs? Kubernetes.


I know you are joking but I am not sure if you are aware of how close you are to reality:

https://thenewstack.io/how-the-u-s-air-force-deployed-kubern...


Literally taking the software to the cloud


wow this is terrifying lmao


That's a bit overengineered, come on, really it just needs docker compose.


Or Erlang... oh wait, that would actually work. We don't want that.


A car built on Erlang would break down every 11 seconds but would immediately fix itself so you never notice anything's wrong.


Supervisors should set the Check Engine light.


Agree. I am not buying any car that isn't running all software components as Spring Boot microservices in a Kubernetes cluster as a standard cloud-native service.

If this setup can run my 95% uptime enterprise apps, I am sure as hell it can run my car too.


Most infotainment is already shit code. If the same people who can't even handle a monolith use more tools, the result will just be worse.


Carbernetes. Now with reinvented wheel functionality.


and a few years later, we get Injecternetes … though I'm not sure if that will be the name of an improved fork… or the name of a security vuln exploit.


Lol. Reminded me of this - https://youtu.be/cfTIjuW6SWM

Fwiw - I am a kubernetes fan. Just not in cars.


If you're a k8s fan but don't consider it reliable enough to do anything even next to, but independent from, safety-critical systems, then that's not exactly a glowing recommendation.


Different tech has different needs. I can see it being really great with server side distributed systems. I don’t really see it having any benefit for running stuff in resource constrained single purpose environments


And yet it's true. Reliability never comes from unnecessary complexity.


Well Volkswagen/Cariad is certainly flirting with that idea. https://datatronic.hu/en/containerisation-in-automotive-indu...


Make sure to run at least 2 clusters in case one goes down


Watch us slowly reimplement Erlang on top of OCI.


Erlang but language independent is actually a solid pitch.


To be precise an incomplete, bug-ridden, undocumented reimplementation of Erlang.


I keep having to stop myself from implementing an incomplete, bug-ridden reimplementation of half of Erlang on top of NodeJS, so I see the attraction there.

Worker threads have a garbage API and I keep finding myself wanting to have n processes sharing m workers and there’s just no easy way.


I know you are joking but it’s actually not the worst thing in the world.


Can't wait to have to restart a kubernetes cluster to get sensor metrics from the engine to start again.


You'll need to replace your kubernator!


> More software is fine IMO.

A lot of software is created on powerful developer machines. But fill up a normal consumer machine with this software, and you start to notice that it maybe isn't so fine.

This is how things like Electron come to exist. I'm sure Electron works fine on developer machines, but once it trickles down to someone's cheap Celeron netbook, it runs worse than retro computers with 384KB of RAM.

Does it really have to be this way? Is more software "fine", if the same could be accomplished with much less code bloat?

P.S. As far as I've heard, one of the best ways for developers to combat this is to target your software for cheap netbooks proactively; test compiled artifacts there rather than on your powerful development machine. If you can make it fast in that situation, it'll be fast pretty much anywhere.

I once met someone who had optimized their DOOM clone using this method, and they claim to get millions of FPS on any vaguely modern machine, just through optimizing it for cheap netbooks.


> More software is fine IMO. More software on critical path ain't.

Based on the story in the parent, it sounds like this was precisely a problem with software on the critical path, otherwise local mechanics/the breakdown service would have been able to fix it rather than give up.


Economically, this doesn't work. The cheaper cars will always be those that roll all functions into a single cost center. This is why cars wind up with a horribly awful touch screen in the center for controlling almost everything about the car's function.


The ECU wasn't doing the engine thing no more.


graceful degradation ftw


> The sophistication and gadgets in modern cars are great until something goes wrong

I'd go further than cars and say, "in most things". Smart-anything, washing machines, printers, sewing machines, thermostats, appliances in general...

My mother in-law has two sewing machines. One of them is one of the first electronic sewing machines (from the 70s) and one is much older. Guess which one still works like a charm?

I'm not arguing against electronics here - many of these things are no doubt improved by electronics to such a degree that the tradeoff is worth it, but it's good to at least acknowledge that there is a tradeoff. It's also good to try to minimize the impact of electronic failure. Smart things would ideally just revert back to being functional dumb things (rather than bricks) if their electronics fail.


If something like that needs to be smart, the smart part should basically be an extra interface. Old printers did it right - a separate box for all the connectivity, working as a print server. That breaks? Just connect directly.

But hey, feeding everything from a single microcontroller is $2 cheaper...


I went to a "tech school" to learn computers while in high school in the 90's. The tech school also had classes for 'the trades'; it was set up to prepare Michigan kids for careers (Careerline Tech IIRC).

Anyway. A big part of that class was learning to clean, repair, and manage printers. Again, it was the 90's, and we were high school kids. We came out quite capable with many computer skills, but the printer stuff really stuck with me. I've done technical support throughout the years and have set up hundreds of printers.

The printers of today are awful landfill fodder compared to the Okidata's of the 90's. Pure simplicity and speed vs FULL COMPUTERS, with scanning, faxing, and every other imaginable feature crammed in with zero hope of doing anything other than replacing the toner.


> The printers of today are awful landfill fodder compared to the Okidata's of the 90's. Pure simplicity and speed vs FULL COMPUTERS, with scanning, faxing, and every other imaginable feature crammed in with zero hope of doing anything other than replacing the toner.

The first LaserWriter in 1985 had more processing power than the Macintosh it was sold to accompany.

Printers have been full computers for a long time now. As we expect them to do more and more, the computers in them get more and more complex.


> As we expect them to do more and more

Who does? Who asked for updates blocking third-party ink, 1GB "drivers", full-color "test prints" each time you switch it on, ...?

Printing reliably doesn't sound too demanding, manufacturers reached that point long ago, and since then I haven't seen all that much groundbreaking innovation. Sure, things like wifi were added but that doesn't require cutting-edge technology - consumer devices could handle that 20 years ago, and more reliably than the printers I've used. I also haven't heard of anyone being excited about NFC in printers, and from experience I can say it's not nearly intuitive or frictionless enough to warrant the integration.


> Who does?

The majority of my printing happens from my smartphone, so my printer needs to be on wifi, and needs to be able to reliably print from Android and iOS.

Accordingly, it needs firmware updates because phones break how they work all the time.

> things like wifi were added but that doesn't require cutting-edge technology - consumer devices could handle that 20 years ago

Not just wifi, but multiple protocols for connecting to printers. Also, that wifi needs to be 5GHz so I don't have to switch over to a 2.4GHz legacy network every time I want to print (which I had to do with my previous 2.4GHz-only wifi printer!)

The onboard touch screen + embedded OS means I don't need to set anything up through a computer or smartphone app.

FWIW I have a black and white laser printer from Brother. I've never had to install a driver; I just plugged it in, typed my wifi PW onto the touch screen, and after a firmware update on first use it has happily been allowing anyone connected to my wifi to print w/o any hassle.


> The majority of my printing happens from my smartphone, so my printer needs to be on wifi

It needs to be on your home network, but it doesn't need to be connected to wifi per se. Ethernet works fine, including ethernet to a wireless mesh AP.


My smartphone does not have an ethernet port. :)

My house came wired for cat5 (the original cat5!) but modern wifi is a lot faster than 100mbps, so I just use wifi for everything.

Latency is higher, but so is the speed.

Also I only own 1 desktop that has an ethernet port, and I haven't plugged the desktop in for 2 years.

I would actually like to have the TV hooked up to ethernet, since its wifi chip crashes every few days and I have to power cycle wifi in settings, but whoever wired the house for cat5 didn't install ports anywhere, although they did install a large patch panel in the basement, but I have better things to do than crimp a bunch of wires to fix one flaky connection.


That's the nifty thing about my second point, with a mesh network - if you put your mesh APs in spots where you have a bunch of wired-capable devices, you can plug everything into the mesh AP and then all the data runs over the mesh AP's high-end radios.


I fixed this by adding a print server to my NAS and using that for AirPrint and the like. A smart power socket prevents the printer from drawing power all the time. No need to have a smart printer.


Most people don't have, or want to maintain, a NAS + print server; having the print server software built into the printer is perfectly reasonable for a consumer product!


In my experience, everyone that has an old trusted printer that they don't want to get rid of does have enough hardware running for a server and uses it like that. Obviously it's not for everyone.


People used to buy HP LaserJet 4 printers at auction because they were peak stability. From the look of things, the 4 introduced the direct predecessor to the wire protocol printers use today (PCL 5e vs the PCL 6 variants).

https://en.m.wikipedia.org/wiki/Printer_Command_Language


Our helpdesk offered to take our LJ2100 and get us something newer.

Many insults were thrown. He didn't try again.


There were a ton of companies selling refurbished or knockoff toner cartridges for those things too. As good as the LJ was, the fact that they had easy access to cheaper supplies just accelerated the process of selection.


> The printers of today are awful landfill fodder compared to the Okidata's of the 90's.

Maybe. But how expensive were they?

I can buy a good laser printer for under $200 these days. It will be more compact, lighter, mechanically simpler and use way less power than older printers. Something has to give.

Some older printers were really overengineered (which in many cases did make them more reliable), but that has a cost. Turns out, consumers didn't want to pay those costs.


My grandma had an original "Montgomery Wards" microwave... they saved up for months to buy it when it came out. Complete with a rotary dial timer.

Never serviced, always cooked perfectly. She had to get rid of it around 2006 or so, when she got a pacemaker... They didn't shield them as well back then...


> My mother in-law has two sewing machines. One of them is one of the first electronic sewing machines (from the 70s) and one is much older. Guess which one still works like a charm?

There's a bit of survivorship bias and N=1 here.

I'm old enough to remember machines full of relays and discrete components that failed pretty often and required a lot of troubleshooting with schematics on hand. Modern appliances – if built out of decent components – have a much better shot at surviving long term. Fewer discrete components that can fail, more debugging capabilities, logic that's implemented in a rock solid processor rather than an unreliable mess of digital gates (or worse, analog logic).

There's obviously a point where there are diminishing returns, and probably another one where more complexity actually decreases reliability.

> Smart things would ideally just revert back to being functional dumb things (rather than bricks) if their electronics fail.

If possible, yes. That's only really an option for simple devices.


You said it so well: cars have become so bloated with software that barely any local mechanic wants to touch them. It happened to me, and this is the major reason why I am slowly shifting to older cars; they are way easier and cheaper to repair.


It's not just software though; even the lights on your cars are not easily user-serviceable anymore. New cars with built-in LED lights have core charges/deposits attached to them: they cost multiple hundreds of dollars, and if you want your core deposit refunded you must return the light. Compare that to 15+ years ago, when you'd go to your local store, buy a new light for $10-30, replace it and be on your way.

Ford apparently ended the core charge program for lighting in 2020, but other manufacturers continue, and that is just one thing that was common for users to service themselves in the past. It's not going to get better.


On one hand, it's okay. Cars are becoming a service, which they are anyway. Most people want to get transported; they don't want to drive, nor do they want to maintain a car. Collective interests are pushing the whole industry toward this. (The goal of decreasing emissions through the whole lifecycle/value-chain, more safety for everyone involved, not just for those in cars; EV-ification itself pushes everything toward consolidation, as cars become simpler, but more one-time CapEx intensive, as the battery costs a lot, and then it just works for a million miles. And then the whole need/goal of densification of cities, more public transportation, etc.)

On the other hand, right to repair is very important. Walled gardens suck. Hundreds of millions of people still live in rural areas in the so-called developed world, etc. And I don't want to subsidize the industry; I'm willing to pay more up-front if it means I can just replace the fucking light bulb.


Same here - I have owned an old petrol Vauxhall for years as my runaround car and it's so much easier to deal with. The big issue I have with newer cars is that the computers mask any developing issues until they get to a point where they just give up. A less sophisticated car starts to just feel different a long time before it outright fails.


The MkIV diesel Jetta I had seemed to fall into limp mode at the slightest provocation, but never left me stranded.

Trust me, it is a difference you can feel.


IT people with mechanical sympathy aren’t exactly an endangered species but we are rare enough that it becomes a bonding exercise (leatherman knives or pocket flashlights are our shibboleths).

If you’re a kid and you also have IT skills you’re going to be interested in IT unless there are extenuating circumstances, like wanting to stay rural, or a family business, or friends and family with union influence. Easier on your body and pays at least as well. So a car mechanic with heavy IT or electrical skills is going to be in short supply. Which is a problem when all cars are electrified.


I drive older Toyotas. These cars will never go into the landfill if I can help it.


But cars from Japan & Korea have always felt reliable no matter the age.

At least from my experience.


Heh, we were stuck in a carpark for 3 hours because the 2025 battery cell in our key remote went dead on a cold walk around a local lake.

It's supposed to have a backup but, like most backups, I hadn't tested that it works and for some reason the RFID reader part wouldn't connect with the car.


Are 2025 battery cells batteries from the future?


assuming this is a cr2025 then most likely 20mm diameter and 2.5mm thick => 2025


this is the most useful thing I've learned so far this year


Yeah almost certainly talking about a CR2025.

Use of a 2025 cell would be rather irritating to me, because in my experience CR2016 or CR2032 are both more common, and it seems like it should not be hard to fit a 2032 into most keyfob designs.


A type of battery, the small round one... not the year.


Thanks, that makes much more sense


Hmmm... maybe this thread will have an answer to the question that has been on my mind for a while. Sometime in the next 5-7 years, I think I'm going to be in the market for a new car. Are there any manufacturers out there whose niche is "dumb cars"? If so, they can have my money.


I for one am very happy that RedHat is putting in these efforts to improve the startup time of podman. This is extremely necessary if our industry is to survive. Last time we tried using podman in our product, it was such a performance mess that we had to completely abandon the product. Our product would have disrupted the entire juicer market if podman had been efficient at that time; ours was the only juicer which had containers running in it.

Hoping to get back to it once this version of podman is released. Thank you RedHat team; we'll send you one of our juicers as a thank you gift.


That's imo a right to repair issue. We can build easily diagnosable and easily fixable hardware. Big corps just don't.


It's also a liability issue. If a company allows tinkering with the software in the car it opens itself up to massive lawsuits.

If we have a right to repair here, we also need to figure out how to handle liability. If you flash your own software on the motor controller and subsequently mow through a group of people because you forgot to do a plausibility check on the accelerator pedal value, who takes responsibility then?

Even if you just get the original software, how do we ensure you flash it correctly?

If you get the schematics, how do we know you used the right parts that are rated for 125°C temperatures?


Lol, this is absolutely funny; every example you came up with has already existed for years. Aren't cars being modded every day - ECU tuning, engine mods, etc.? Go ahead, sue the company; companies aren't some innocent babies, they can afford to quickly dismiss the claim by just pointing towards the mod. Auto companies have never been held liable for a car that has been modded. Does it waste money to be sued? Yes! But does it save a lot of money for consumers and is it much better for the environment? Yes! If companies are greedy/selfish about their profits, then consumers don't need to think about how right to repair hurts those companies.


The EPA has suggested they will hold companies liable in the future. It hasn't happened yet, but they are hinting. If it is just one hobbyist they don't care, but there is a whole industry around chipping your diesel truck, and those chips clearly increase pollution. A modern diesel truck doesn't emit black smoke, yet a large % of the diesel trucks you see are "rolling coal", which is a sure sign that someone has disabled the emissions controls.


> The EPA has suggested they will hold companies liable in the future.

The EPA can suggest all it wants. Holding one entity liable for the actions of an entirely different entity beyond the former's control is asinine, and I can guarantee you these automakers will gladly sic their armies of lawyers at a Supreme-Court-bound case and/or their armies of lobbyists at legislatively castrating the EPA if the EPA made any such attempt.

On top of that, the EPA is virtually irrelevant for EVs, and yet EVs are just as locked down (if not moreso), so I don't buy the "EPA might punish us" argument for that reason, too.


Not disabled the emissions controls, but deliberately remapped it to grossly overfuel at large throttle settings.

Why people want to trade off power for a big cloud of black smoke is beyond me, but there we are. If they want to get 50bhp from a nine litre engine, that's their concern.


"ECU tuning" is just fiddling about with some values in a lookup table, though.


The liability excuse is a lie told to you by companies trying to increase their profits.

Who is liable if you tweak the software on your 2023 Mercedes? The same person who is liable if you tweak the hardware on your 1987 Chevy. There's plenty of precedent on how to deal with this.


> If you flash your own software on the motor controller and subsequently mow through a group of people because you forgot to do a plausibility check on the accelerator pedal value, who takes responsibility then?

I would, obviously, for making the unsafe modification. In what multiverse would the manufacturer be liable for something entirely outside the manufacturer's control?


It should be, but right to repair is mostly not about diagnosis and repair. Instead you are seeing a bait and switch where someone wants to disable emissions controls and claims that is a repair.

What can a mechanic not do to a modern car with the standard scan tools and training? Most old school mechanics still lack the training to work on computers, but those that have that training have no problem fixing cars.


> It should be, but right to repair is mostly not about diagnosis and repair. Instead you are seeing a bait and switch where someone wants to disable emissions controls and claims that is a repair.

Well, first of all that happens even under the current draconian anti-repair setups already, and secondly that's a felony. Just because you can do something illegal doesn't mean we should child-proof our whole society so you can't do anything anymore, just because someone MAY do something illegal.

We still allow you to buy knives, in some places even guns.


If you had a little more software in your car it could automatically remediate the issue and you'd be on your way with no repairman involved or at least tell the repairman exactly what to fix. Maybe you could fix it with the step-by-step workflow on your console.


Half the time there’s a light on my car, it’s a damn sensor! More components mean more points of failure.


Check Engine Light being on comes standard


Mechanics call this the money light. :)


Indeed. When I took my car in last winter because the light had come on for no apparent reason, as the car was running fine, they charged me $140 to take it out on the road to try to find the reason. No reason was found. Two weeks later, the light came on again (towards the end of 2022). The car was and is running fine. The light will remain on until July when I take it in for a scheduled oil change.


Sounds like it was an intermittent fault? If so, those are notoriously hard for any mechanic to diagnose. Cars _usually_ store code history but it's up to the manufacturer and often the data they provide to the mechanic with a scan tool is misleading or an outright pack of lies.


Or you could buy a $20 code reader and see what's causing the light yourself.


But the mechanics using their professional grade code reader and related equipment couldn't figure out what's causing it. At least I'm comfortable with "unknown" after that's their diagnosis; I don't think that would be the case if I did it and got that result.


> it could automatically remediate the issue and you'd be on your way with no repairman involved or at least tell the repairman exactly what to fix

Nah, it would figure out the best time and place to break down, order you an Uber, and Uber would pay your manufacturer for the order flow.

It would also show you ads while you wait.


You take out your picnic basket

'cos the car has blown a gasket

in the middle o' a place called Rannoch Moor

https://l-hit.com/en/143370


Haha - that summed up my day perfectly!


>a local garage could most likely have fixed the problem

So, what was the problem?


Supposedly the electronics were too complicated for a shop to diagnose and fix the car on the spot, but I imagine the real problem is that diagnosing non-obvious problems is tough for anyone to do on the spot, because all mainline service centers for the big manufacturers are weeks behind and can't just squeeze in the 4 man-hours it might take to tear down and diagnose random problems. Older cars and systems were cheaper and easier to diagnose, but they also probably broke down once a year or more, while I had no issues with an 80k-miles-in-2-years 2018 Honda Accord and am still going strong with a 60k-mile no-maintenance 2020 Tesla Model 3 SR+.


> If the backup camera or other sensors were to run as containers, we needed to improve the starting speed significantly.

I think I see the problem already. Why does anyone think it's a good idea to put everything in an embedded system into a container? Particularly as everything comes from a single vendor, so the usual argument about "but libraries are too hard!" doesn't apply.


To be honest I think we should be adopting the full ecosystem we've been busy building around containerization.

Imagine calling up breakdown assistance because your car won't start, mechanic comes out, cracks the hood and is like "ah there's your problem right there, ignition service has only 1/2 pods healthy because the node went into NotReady due to DiskPressure. I can clear up some log files so it goes underneath 80% disk usage again but sucks teeth it's gonna cost ya. I'd recommend throwing the whole car out and getting a new one. You shouldn't get an attachment to these things, they're cattle not pets."

Truly breathtaking.


Looks like your cars Kubernetes certificates have expired after a year, we'll need to SSH in and run kubeadm to refresh them. Wait, the 5G pod isn't starting ...


You joke, but I decided I wanted to try out Kubernetes, so I set it up at home and moved some of my local services into it, one of them being my custom lighting automation software. Eventually my lights just stopped working, and after a lot of rummaging it turned out to be expired certificates. I promptly put the software back where it was before, removed Kubernetes, and decided there were better things for me to play around with.


Can't wait to plug in an OBD2 scanner and get a k9s screen.


I agree. Microservices are a great concept for a lot of problems. But now I constantly see way too small services. Every tiny piece of software gets its own service and its own software lifecycle (including versioning and deployment).

And what happens now is that you need a huge effort to integrate all those components. End-to-end system tests get much more important, but are still harder to do than simple unit/integration tests. Traditional testing strategies start to get pointless, because most bugs now only appear when combining services in a production setup.

Yes, development gets easier, because every team can just develop, without aligning too much with other teams. But the deployment/ops/acceptance step often gets impossibly complex.


You are blaming cultural issues on technology. You have the same integration pains with libraries too. In general if your teams are not aligned, it doesn't really matter if they are developing microservices, libraries, or one spaghetti ball mess, it's going to be a problem anyways


The old method of forcing everyone to freeze at certain dependencies also means that any bug in those dependencies is fixed in all components at once.

Obviously going too big here is problematic, as it can slow things down when tens or hundreds of people are involved in every update, but going too small has similar problems, on top of more, smaller services generally eating more resources. We don't need a "front reflector LED setting app" being called from a "lighting setting app" called from a "car setting app"; it can probably just be one service.

Smaller services also mean more services to update if some commonly used lib gets a security bug. Updating an SSL lib in a big monolith is just update, run tests; in microservices that's multiplied by the number of teams and services.

> In general if your teams are not aligned, it doesn't really matter if they are developing microservices, libraries, or one spaghetti ball mess, it's going to be a problem anyways

Moot point. We pick the tools to make the job easier. A good team with bad tools will still be slower and less efficient than a good team with good tools.


> You are blaming cultural issues on technology

Because technology almost always carries cultural values with it.


Excessive microservices is lazy engineering. It doesn't solve any problems - they just get moved to the next quarter, or some other team.


Do you oppose running camera software in a separate process? I think it makes sense. The camera process might crash and will be restarted; this should not cause a restart of the entire shell.

What people should understand is that a container in Linux is just a separate process running in a powerful chroot (which isolates not just the file tree, but also the process tree and other things).

So the same reasoning which applies to running some code in a separate process also applies to running some code in a separate container.

I'd even argue that in an ideal world, almost every process should run in a separate container. The tooling is not there yet, but the concept is just fine.
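To make that concrete, here's a minimal sketch (not how Podman actually does it, just an illustration) of the point that "a container is basically a process in namespaces", written in Go for Linux - a real runtime layers cgroups, a root filesystem, seccomp and so on on top of this:

    // container_sketch.go - Linux only; needs privileges (add CLONE_NEWUSER to
    // try it unprivileged). Starts a shell in fresh UTS, PID and mount namespaces.
    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            // New hostname, PID and mount namespaces for the child process.
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Inside that shell, changing the hostname doesn't affect the host and the child is PID 1 in its own PID namespace - which is the core of what "running in a container" means at the process level.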


Containers usually ship their own libraries, which means less sharing, more disk usage and higher memory pressure.


That depends on the implementation. Shared libraries which use the same inode will be shared, AFAIK. If containers use different libraries, they'll not be shared, of course, but that's a deliberate choice of the container creator.


Containers can use the same base image for the OS.


I see these as a relatively straightforward set of problems to identify, quantify, and remedy. It’s a tradeoff between static memory usage and stability. If that additional memory footprint becomes an issue you can make plans to align dependencies.


And much lower chances of getting security updates, now that everything is a huge blob.


It's not just the camera process though - containers ship an entire OS (except the kernel).


Though you don't need most of the OS - you may run bare-bones containers with just a statically built binary inside the image; e.g. it's possible in Go.
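As a minimal sketch of that idea (the file name, port and build flags here are just illustrative, not from the article): a tiny Go service built with CGO_ENABLED=0 is a single self-contained binary, so the image can contain essentially nothing else (e.g. a "FROM scratch" base):

    // tiny.go - a self-contained service with no libc or OS userland dependency.
    // Build with: CGO_ENABLED=0 go build -o tiny tiny.go
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "ok") // trivial health endpoint
        })
        // The binary is effectively the entire userland of the container image.
        http.ListenAndServe(":8080", nil)
    }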


I think if you opened up a Tesla infotainment system you would find something eerily similar to containers. Remember that what you and most programmers think of as containers is merely one possible assembly of a bunch of kernel features. There exists not just a gradient between plain processes and containers but a whole solution space with different tradeoffs.

I happen to know a small amount about the Tesla internals, and they are using cgroups, namespaces, AppArmor and eBPF-based syscall filtering to secure various processes on the car.

You almost certainly should not use docker or podman to manage processes on a car, but that doesn't mean you shouldn't embrace the subsystems they are built on in order to increase security, resilience and defense in depth.


Everything never comes from a single vendor. Even on microcontrollers, the hardware abstraction layer is hiding peripheral support packages for all the I2C and SPI things it talks to, each potentially from a different vendor.

I'm not saying we should be running k8s on an 8051 or even a Cortex-M33, but on an ARM7? Maybe.

Cult of Ferris time: static linking in Rust means your binary is your container, particularly if you statically link in musl.


Playing Devil's advocate here, but there are some good reasons you might do this. For example, Docker's "pull" system is great for updates, and means it's trivial to roll back to an earlier version if something went wrong. A Docker registry also means you can easily switch to another version when needed. You also get supervision of containers, with automatic restarts (yes, I know you can do this nowadays with systemd :)


Sure, but if you provide the possibility to deploy a lot of components separately, at some point they will run in different versions. And who knows if rear view camera 1.3.22-44 works with turn signal 4.86.233-stable and brake pedal 0.6.9876-beta?


Those 3 for the most part don't have to work together. They need to run on the same CPU without taking more than their allowed portion of the CPU.

If they have to work together, the communication protocol is clearly defined well in advance and limited to exactly what they need to say. Thus we are reasonably sure that if any one combination works, all possible combinations will work. Even then there is typically higher-level control to only release combinations that are tested to work together.


That sounds good in theory. The reality I live in, is different though. If it works for you, that’s great!


How is that different from shared libraries?


Shared libraries are bundled into one container, then the one container is submitted to a gauntlet of tests (service / API-level tests, e2e tests, etc.).

If you're submitting an embedded device to a gauntlet of tests, you're anyway "containerizing" all the shared libraries when you build the embedded image. Trying to build containers within the embedded image has questionable merit.


It’s the same problem. But library authors usually take extra care to stay compatible.

And if you create your own shared libraries, they are normally not deployed separately, usually you bundle them with your main executable.


Because you can prove it. In the auto industry you can't just claim it conforms, you have to prove it does for all possible conditions. With containers enforcing that separation, you can prove it much more easily.


A statically compiled executable is even easier, and can be hosted on your webserver.


But unfortunately they also become very space inefficient when a lot of processes need the same (relatively) large blocks of code. But if you carefully curate your containers to use the same base image that contains those libraries already, you don't have to duplicate it.

Comparable would be having an inode-deduplicating file system and deterministic binary generation. But it's hard to prove "the correctness" of an inode-deduplicating file system to the degree that extreme environments like auto require, and deterministic binary generation is hard to control even if it is possible with the specific build tools (it usually isn't).


Yup, I can imagine complete images are easier to version-manage and to tag as minor/major/hotfix than each individual part of the stack.

I think people underestimate the amount of software that is ALREADY running in their cars/airplanes/helicopters and even elevators.


"yum", "apt" etc have registries, roll back etc and have for a good few decades.


<sarcasm>But..but..they are not "cool" </sarcasm>

This forced application of new technologies into every possible ___domain can have real security and reliability consequences - and all because some VP somewhere decided they needed to use the latest shiny thing in cars or fridges or ACs or whatever.


Imagine we could build a dishwasher that didn't throw water on the floor?

Oh well, we got WiFi instead. That's fun, right?

If a dishwasher needs a firmware update, I might simply argue it was defective. Not everything needs to be secure or updated constantly. It shouldn't have network access to begin with.


Exactly. When will vendors stop touting WiFi access as a "feature"?


And when you upgrade the SSL lib, 200 apps are now no longer vulnerable to the exploit, vs. updating 200 different containers.


Unless the update requires version updates on multiple services, which means the versioning and rollback has the same effect as in a more monolithic codebase, but with the added complexity of not knowing how it all impacts each other.

In my experience, updates to smaller services are often trivial, while updates that are actually impactful would be way easier to coordinate in a monolithic codebase.


Yes, my first job over 15 years ago was essentially microservices, and we ran into this type of problem.

Trivial updates to individual services could be iterated extremely quickly.

Systematic changes to behavior across services were so hard they became incredibly uncommon.


Not everything is developed by a single vendor. Sometimes they buy a program from a vendor they don't fully trust.

There is also security: I don't care nearly as much if someone hacks my radio system as if someone hacks the brake system. Containers are one part of the total package to isolate parts, so that if there is a hack, the whole system isn't taken out.

Computers are a large system. Someone in "the other group" making a mistake can bring your part of the system down. Much of the code is written in C or C++, and so one "old school" programmer can make a mistake and write past their memory into a data structure you are using, and the bug report goes to you not them.

If you have the above system, when splitting the monolith apart you will discover a libfoo.so that both depend on, and the two groups will want to upgrade it separately: containers allow this to happen without modifying your build system to give them different versions or otherwise allowing two different libraries with the same name to sit on your system.

The above is what is obvious enough that I can talk about it. (I work on an embedded system at John Deere so I cannot comment more than the above even if you ask.)


https://www.toradex.com/torizon they do and it works quite well. Super handy to have CI/CD and easy OTA updates with containers.


Especially a thing that should just be "a device that puts a video stream on whatever bus it uses". Infotainment already uses HTML/JS-based UIs; just embed a video player in there...


> we did was analyze what happens when Podman starts a container and why it takes so long. It turns out there was a lot of low-hanging fruit.

I've done this analysis for lots of software before; Windows has a really nice tool called Process Monitor that I've used to find huge slowdowns. Point it at a process and it'll tell you every bit of IO that the application does, and at that point you can just start digging and opening bugs.

IMHO almost every piece of software of any significant size horribly misbehaves. Opening the same file again and again, writing out lots of tiny temp files, reading the same key from the registry (or some config file) over and over again, and worst of all, using fixed timeouts waiting for a task to complete instead of doing the work to make things properly event-based.

On that last note, timeouts instead of event callbacks (or some sort of signaling system) are the cause of so much slowness across our entire industry. If something you are doing on a computer takes a human-noticeable amount of time, and you aren't throwing around tens of GBs of data or waiting on a slow network, then you are flat out doing something wrong.
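A toy illustration of the difference (just a sketch; the file name and delays are made up):

    // wait_sketch.go - contrast a fixed timeout with waiting on a signal.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        done := make(chan struct{})

        go func() {
            time.Sleep(120 * time.Millisecond) // the actual work
            close(done)                        // signal completion
        }()

        // Anti-pattern: a fixed timeout always costs the full second,
        // even though the work finished after ~120ms:
        //   time.Sleep(1 * time.Second)

        // Event-based: return the moment the work signals completion.
        start := time.Now()
        <-done
        fmt.Println("waited", time.Since(start)) // ~120ms, not 1s
    }

Multiply that wasted 880ms by every component that does it on a startup path and it adds up fast.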


Why does a car even need to run containers? It's known hardware, so why containers? Feels like lazy engineering.


I don't find the idea absurd, especially for the entertainment system:

* it makes it easy to separately update different apps that are shown on the same screen

* it unifies the update process

* you can download the update in background while the app is running, easily roll back to an old version

* it's familiar technology to many engineers (you could call that lazy, but it also reduces risk)

* it's easier to use an existing networking implementation than having separate chips for each task, and then having to connect them through busses

* allows for pretty good resource sharing (RAM/CPU)

* pretty good isolation out of the box

I wouldn't want that for my engine controls, but navigation/radio/camera/climate control, why not?


What of all the security and authenticity efforts? And the idea that you include the runtime in your delivery: you develop against something like Red Hat Enterprise Linux 8's glibc and some other libs, and that's what you test and that's what you ship, it's all in a container. If you put an SBOM on there that the car maker verifies, that seems good. Resource constraints per component seem good.


Isolation between apps? Although I'm not sure what that would buy over just having separate UIDs.


It buys you conformance to Conway's Law. The team building the media center is that much more certain that the climate control code is fully isolated, up to and including the ability to have their own fully isolated filesystem so updating a single library won't take anything else down (and updating a single library doesn't require buy-in from everybody who works on the car), and that they only communicate exactly and only on the published API specs and not via dropping undocumented files on the file system or other such things. (Or if they do, you have a place to see that they have a weird bind mount they really shouldn't, etc.)

I wouldn't consider this a night & day change, but an incremental one. But a good incremental one overall; I wouldn't drop everything to implement this but I'd definitely see it as a good thing even in the absence of functionality improvements. There's other benefits too like being able to update just one container in case of some problem, and having the blast radius more thoroughly contained than it would be with everything installed into one big base system.


I'm not saying that containers are useless - they are more sealed than just separate UIDs - but in theory you could have all those benefits with separate paths for separate users too. Didn't Android also use UIDs for app-level separation?


The problem with that approach is that as you scale up, it becomes hard to be sure you're isolated. And that lack of clarity in the human comprehension turns into technical ways in which it will turn out that you're not isolated after all, for instance my example of a base system library upgrade that one team does and breaks another team.

You can in theory fully isolate everything between teams, but without technical barriers preventing you from crossing, you will eventually cross.

Plus you have the problem that while full isolation will benefit your project and your company three to five years from now, violating the isolation benefits the company now. Every monolithic bit of software in the world could have been split, but there are real reasons why it wasn't, and they don't go away because someone observes that it could have been done a different way.

Isolated containers, by providing a technical barrier, allow the teams to be sure that they are both isolated from other teams breaking them, and breaking other teams, with things like library upgrades. It's a significant change.

It is productive to consider the difference in the Android world, but I would submit the isolation works in another dimension there, by virtue of the various apps being by necessity utterly isolated in Conway's Law terms. Within a single corporate entity there are many more temptations to get short-term wins by violating the barriers that theoretically should be there.


easier done with dedicated Controllers instead of one BIG controller that needs to containerise its software? Why do the rear camera and lights need to use the same controller as the engine sensors? This way you even avoid the latest "CAN bus injection attacks" that use the lights connection to inject key crypto attacks. Not everything needs to be integrated.


> easier done with dedicated Controllers instead of one BIG controller

This brings costs and supply chain issues, and we had plenty of supply chain issues earlier this year.


If it already runs Linux, using containers has no overhead and a lot of pros, security just being one.

When you ask "why containers?", re-ask "why sandboxes?" If security is important at all, then you have your answer.

Containers are so convenient people forget we used to use chroot jails.


This should be part of the "How to Overengineer Anything and Everything" series.


podman rootless and startup speed were what lured me in; sadly, after a couple of years I've switched back to docker, but happily in rootless mode now too.

podman works fine until it doesn't. My hypothesis is that it has some fundamental design philosophy that makes it brittle. Properly cleaning up doesn't exist in their vocabulary.

For example, a cancelled download or image extraction can bring the whole thing down at the worst time; you have to hunt down the corrupted folder and remove it so that anything works again.

A failed compose startup can leave the network in an undefined state that is hard to diagnose and impossible to recover from without wiping some folders within /run/user and killing some rogue processes manually.

This is further cemented by the fact that a lot of minor issues are answered with: podman system reset, which reeks of rm -fr node_modules.

docker was always a pleasure to work with, I still don't understand why I suffered with podman so long.


That's pretty much my experience! I've tried switching to podman a few times now and I really wanted to like it. Each time lasting a few months, and it always ends with frustration. At this point it's like btrfs for me. Perhaps it has improved, but I've been burned too many times, the trust is gone and I just will not go back. Some software just seems to have fundamental design issues (too fast, too early?), and when that's the case, more often than not it doesn't matter how many years of development go into it, it will always have problems.

Docker isn't perfect. I wish they would put more development into rootless mode. But it has never given me the kind of issues podman has. It just does what I ask it to and gets out of the way.


I'm happy to see improvements like this.

In 2018 I opened a github issue around container startup time[0] with Docker. A couple of things have changed since that issue but generally speaking we are talking about ~5s (containers) vs 150ms (no containers) to start a realistic process that depends on networking and other things you'd expect in a typical web app.

[0]: https://github.com/moby/moby/issues/38077


Who said AI is killing dev jobs? Now devs have alternative employment as car mechanics. Drop Kubernetes or VMs into the car and DevOps folks can also join. It would be so fun to hear "Umm, your ingress seems to use an older API, I have to update it for the gearbox to engage" and then see them run kubectl apply.


I am very likely one of only a few people but it really irritates me when the term "fold" is used when they really mean "times".

Folding a piece of paper (just like binary numbering) 6 times will provide you with a stack of 64 sheets.

They did not have a performance increase of 64 times.

This is identical to the idea of stating "magnitude" as being the number of times based on 10.

How wrong am I?


Haha, I like that idea, but it's your own personal definition. "-fold" is multiplicative and has been used to flexibly change the part of speech since at least Old English [1]. You can think of folds as referring to the individual layers of something folded, rather than the action of folding. It can be either depending on context. I can make 64 folds (layers) by making 6 folds (actions).

[1] https://www.etymonline.com/word/-fold


All this because nobody wants to change the ELF loader to better isolate dependencies of binaries


I think it's more because Linux OS developers have never bothered to move away from Unix's "all apps get mixed together at absolute locations" filesystem model.

If apps were like on Mac - self-contained directories that can be installed at any path - then Docker would probably be a footnote.


> mixed together at absolute locations

DT_RUNPATH has existed probably since before I was born. The problem is that it's not always utilised, distributions prefer to share libraries over isolating applications, and loading shared libraries isn't the only host-dependent thing done by application code.

Also you realise that docker provides more functionality than a tarball, right?


DT_RUNPATH is not nearly flexible enough to matter


Can you elaborate on what more functionality you were expecting?

There's AppImage and a variety of home-directory package managers on *NIX platforms. None of them caught on.


I'm not really expecting anything; it's just my experience developing commercial desktop applications on Linux that you inevitably end up having a startup script that sets LD_LIBRARY_PATH before the main process starts. And even then, global symbols with the same name collide, so you have to be really careful about what gets loaded into the process.


Yeah I'm not saying that you can't do it, just that nobody does do it.


Ironically, Linux probably mostly does this to save disk space (and probably also to save RAM in the early days). And now with docker you download hundreds of MB only to install a small python script ...


I didn't know they used containers in cars. I guess it makes sense when you think about it but it always felt like more of an "enterprise" solution to me.


AFAIK, Red Hat is working collaboratively with General Motors [0]. There is one article [1] on their blog regarding containers in cars. I haven't found much technical information in it, but maybe you will find something pleasing to your eye.

[0]: https://www.redhat.com/en/about/press-releases/red-hat-and-g...

[1]: https://www.redhat.com/en/blog/running-containers-cars


Why would you run podman inside a vehicle's computer? Cool nonetheless.


Perhaps you want to somehow isolate different parts of the car to protect against something like the CAN bus attack?


Containers do isolation in userspace; it's a pure software thing. They're not doing any kind of hardware isolation, nor are they able to.


Don’t they use cgroups? It’s software but the kernel helps.


Well, you can ignore all sorts of compartmentalization and run everything in the same cgroup, same chroot, same user, like how it is on conventional x86_64/aarch64 computers. It just isn't safe.


Adding a `sleep $DELAY` to a startup script isn't something to brag about :-P. scnr


They removed it.


Do we know which cars specifically run their applications in containers?


Agreed, I'd like to know so I can avoid buying them.


Do we know which cars specifically run their applications without any sort of containers, just everything thrown in the same userland without any care for security?


If the security of it is a concern, then it's already doing too much.


So it was poorly thought out or poorly designed all this time, then they fixed the obvious errors? That was my impression after reading the article...


Makes you wonder what the aspects other than startup time probably look like?


I don't know if they fixed _all_ the errors -- there's still apparently a bunch of containers running in somebody's car...


Nice to see improvements in podman startup and how the team achieved it...

Constraints (the car environment) drive creativity!


Can podman be used instead of docker on Windows for minikube? Would it work faster than docker in that case?



