
> manages to fill about ~20GB on a regular basis

Unused memory is wasted memory, so it makes sense to always keep a lot in memory. That doesn't mean you'd have a worse experience with 16GB.




"unused memory is wasted memory" is a meme, technically true from a narrow point of view, but leading to bloat and encouraging bad practices. A little bit of care could shave off orders of magnitude of memory use, as well as performance, which could ultimately allow for cheaper computers, sustainable use of legacy hardware and keeping performance reserve for actual use. In reality, I the idea of increased efficiency by using more memory ultimately leads to software requiring that memory that used to be optional, and software not playing nice with other programs that also need space. Of course even with the idea to have everything ready in memory, software is not generally snappy these days, neither in starting up and loading even from fast SSDs and during trivial UI tasks. Performance and efficiency is also generally not something that programmers regularly seem to consider the way real Mechanical-, Civil-, or Electrical Engineers would when designing systems.

I accept trade-offs concerning development effort and time-to-market, but the phrase "Unused memory is wasted memory" does not seem appropriate for a developer who's proud of their work.

Little Friday rant, sorry :-)


No, unused memory should always be used as cache if it has no other use at the moment. It's wasted otherwise.


I think a lot of this comes down to semantic confusion for most people. Intuitively one would assume "unused" memory is simply the inverse of "used" memory, but not everyone agrees on what even counts as "used" or "unused" in the first place. In reality, on macOS/Windows/Linux "used" memory counts only specific types of usage (e.g. processes/system/hardware), cached data is counted separately as cached, and there are multiple ways to refer to which "unused" portion you mean (e.g. free vs available), plus anywhere from a half dozen to several dozen ultra-specific terms that break things up further but probably don't matter in context.
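On Linux the free-vs-available distinction is visible directly in /proc/meminfo. A minimal sketch of reading it, assuming a Linux box (MemFree/MemAvailable/Cached are the standard field names there; the file reports values in kB):

    # Linux-only sketch: "free" and "available" are different numbers.
    # MemFree      = pages nobody is using at all
    # MemAvailable = kernel's estimate of memory obtainable for new
    #                workloads without swapping (includes reclaimable cache)
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            fields[key] = int(value.split()[0])  # /proc/meminfo reports kB

    print(f"free:      {fields['MemFree'] / 1024**2:.1f} GiB")
    print(f"available: {fields['MemAvailable'] / 1024**2:.1f} GiB")
    print(f"cached:    {fields['Cached'] / 1024**2:.1f} GiB")

On a typical desktop "available" comes out far larger than "free", because most of the gap is reclaimable cache.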

Once you clear the semantics hurdle, it's surprising how much people agree that "used" should be optimised, "cached" should fill as much of the rest as possible, and keeping large amounts of "free" is generally a waste. The only remaining debate tends to center on how much cache really matters with a fast disk, and what percentile of workload burst you should keep "free" headroom for.


Is that generally how unused memory is used, and will this kind of "cache" actually be released when another application truly needs the space for vital things?


Yes, that's the main job of the OS's memory management.
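You can watch the page cache at work yourself. A rough sketch (the path is just a placeholder, pick any large file; the first read is only "cold" if the file isn't already cached):

    import time

    # Read the same file twice: the second pass is served from the OS
    # page cache rather than the disk. Under memory pressure the kernel
    # evicts these cached pages to satisfy real allocations, which is
    # why cache doesn't count against "available" memory.
    PATH = "/path/to/some/large/file"  # placeholder: pick any big file

    for label in ("cold", "warm"):
        start = time.perf_counter()
        with open(PATH, "rb") as f:
            while f.read(1 << 20):  # read in 1 MiB chunks
                pass
        print(f"{label} read: {time.perf_counter() - start:.3f} s")

The warm read is typically orders of magnitude faster, and none of that cached data gets in the way of a program that actually allocates memory.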


Using memory doesn't have to be about badly written software, though; there are many legitimate use cases where actually using your memory makes your experience better.


My comment didn't suggest that there are no legitimate cases for using more memory.

It's too easy, and happening too often on HN these days, to reply with a low-effort contrarian statement without engaging with the central point of the argument.


I think a more accurate statement is that developer time is more expensive than RAM now.


Developer time is more expensive to the company than the user's RAM is, of course.



