
> I bet a lot of people 10-15 years older than you would say the same thing - except they'd say it about you and your generation.

And they’d probably be right!

I remember the grognards giving me shit about memory management, and me giving it right back by explaining that what they considered a large chunk of memory would be worth pennies next year because of Moore’s law, and that I wasn’t going to waste time on knowledge that became obsolete faster than I could learn it.

Quantitative differences can create qualitative differences, and I don’t think it’s surprising that we’re in a different age of software engineering than we were 10-15 years ago, whatever year you plug in for X.




> I remember the grognards giving me shit about memory management, and me giving it right back by explaining that what they considered a large chunk of memory would be worth pennies next year because of Moore’s law, and that I wasn’t going to waste time on knowledge that became obsolete faster than I could learn it.

And that's why all applications are laggy as shit these days.


You’re both right.

But ignore memory at your peril. I have one project that runs on a 256GB instance, for a fairly boring CRUD app. I am asking a lot of questions, since we are apparently having the yearly 'we need more memory' conversation again. Questions that are leading to speedups, just by using less memory. At the bottom of that stack is an L1 cache of less than a hundred KB. It doesn’t matter right up until it does.

I have seen huge classes of 300+ string fields where maybe 10 of the fields were actually needed. They threw everything in 'just because there is enough'. Yet something has to fill in those fields. Something has to generate all the code for those fields. The memory managers have to keep track of all of that junk. And all of that sits in a pipeline, a cascade of applications, so that 300+ field class is copied 10 times. Plus the cost to keep it on disk and shove it through the network.
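
To make the pattern concrete, here is a minimal sketch (my own hypothetical names, not the actual project) of the wide-record problem and the narrow projection that fixes it:

    // Hypothetical wide record: hundreds of string fields, dragged
    // through every stage of a pipeline.
    interface WideRecord {
      id: string;
      customerName: string;
      // ... imagine ~300 more rarely-used string fields here
    }

    // A narrow projection carrying only what the next stage needs.
    interface OrderSummary {
      id: string;
      customerName: string;
    }

    function toSummary(r: WideRecord): OrderSummary {
      // Copy 2 fields instead of 300+ at every hop in the cascade.
      return { id: r.id, customerName: r.customerName };
    }

Project early and every downstream copy, serialization, and network hop pays for 2 fields instead of 300.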


On the other hand, I've seen developers who don't know about things like that start up a project on a small instance and wonder why everything is running at turtle speed.

People who stopped running tests because they were configured to make 10,000 API calls in one minute, and it crippled the app until everything was restarted.

"Add some more memory to your database instance....poof"


I definitely agree with the greybeards, and I think we see the results of not listening to them. We have processors, buses, networks, and all sorts of hardware that are orders of magnitude faster and more powerful than what they began on, yet many things are quite slow today. Worse, it seems to be getting slower. There is a lot of value in learning about things like caching and memory management. A lot of monetary value.

It’s amazing to me that these days your average undergraduate isn’t coming out of a computer science degree well versed and comfortable writing parallelized code, given that is the direction the hardware has moved. It’s amazing to me that we don’t normalize thinking about caching, considering that a big change driven from the mobile computing side and adopted into our desktop and laptop environments is to fill RAM because you might as well. And it’s crazy to me that we have games that cost hundreds of millions of dollars to develop yet are buggy as shit, hog all the resources of your machine, and can barely run at 4k60. Games where you can hit a bug and go "yep, I know what’s causing that memory error".
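
The caching point is easy to demonstrate; a small sketch (my example, not the commenter’s) of the same arithmetic in two traversal orders:

    // Same work, two traversal orders over a row-major array.
    const N = 4096;
    const grid = new Float64Array(N * N);

    function sumRowMajor(): number {
      let s = 0;
      for (let i = 0; i < N; i++)
        for (let j = 0; j < N; j++) s += grid[i * N + j]; // sequential: cache-friendly
      return s;
    }

    function sumColumnMajor(): number {
      let s = 0;
      for (let j = 0; j < N; j++)
        for (let i = 0; i < N; i++) s += grid[i * N + j]; // stride-N: a cache miss per read
      return s;
    }

The strided version is typically several times slower once the array stops fitting in cache, despite doing identical work.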

Honestly, I think so much of this comes from the belief that you need to move fast. Because why? That would require direction. The money motivation is motivating speed, but we’ve lost a lot of vision. Moving fast is great for learning, but when you break things you’ve got to clean them up. The problem is that once these tech giants formed, they continued to act like scrappy startups: never going back to fix the mess, because we gotta go fast, we gotta go forward. But with no real vision of forward. And you can’t have that vision unless you understand the failures.

There is so much obvious low-hanging fruit that I can’t figure out why it isn’t being picked: deduplicating calendar entries, automatically turning off captioning when a video has embedded captions so you don’t overlay text on top of text, searching email, or even setting defaults in entry fields from browser data (e.g. if you ask for the user’s country, put the one the browser is telling you at the top of the fucking list!). These are all things you would think about if you were working in a space where you needed to consider optimization, if you were resource constrained. But we aren’t, so we let it slide.

The issue is death by a thousand cuts. It isn’t so bad in a few cases, but these things add up. And the great irony is that scale is what made tech so powerful and wealthy in the first place, yet no one stops to ask if we’re also scaling up shit. If you’re printing gold but 1% of your gold is shit, you’re still making a ton of shit. The little things matter because the little things add up. You’re forced to deal with that when you think about memory management, but now we just don’t.
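
The country-default complaint, for instance, is nearly free to fix. A hypothetical browser-side sketch using the standard navigator.language and Intl.Locale APIs (the "#country" selector is made up for illustration):

    // Guess the user's country from the browser locale and preselect it.
    function defaultCountry(): string | undefined {
      try {
        // navigator.language is e.g. "en-US"; region is then "US".
        return new Intl.Locale(navigator.language).region;
      } catch {
        return undefined; // malformed or missing locale tag
      }
    }

    const select = document.querySelector<HTMLSelectElement>("#country");
    const guess = defaultCountry();
    if (select && guess) select.value = guess; // otherwise leave the full list as-is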


As the base reality of computers and the inflated reality of software have diverged more and more, education and culture have tracked the software story, leading to runaway irresponsibility. Forget not optimizing for performance; I think a lot of software today straight up fails to actually serve users in some way or another. And those are paying users at that!


I agree. There is just too much obvious low-hanging fruit. So I’m just trying to inspire people to take action and fix stuff. Ask not for permission; just fix it and ask for forgiveness later.


As a fun anecdote, I think this same rationale ("next year’s hardware is so much better") is why so much desktop software of the ’90s and ’00s became slow: "meh, you don’t have to care about performance, next year’s CPU is going to be so much faster anyway".

Then suddenly single-threaded speedups stopped coming (and people realized that even though CPU clock speeds had grown for years, that was never what Moore’s law actually promised).

Ofc your rationale used Moore’s law correctly, while the "infinite CPU speed growth rah rah rah" people didn’t.



Seeing this multiple times a day, for multiple articulable reasons, in the mid-2010s (if my memory doesn’t fail me)

    FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
     1: node::Abort() [/usr/bin/node]

is what made me decide it might finally be worth the time to actually learn how memory worked.
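
For anyone hitting the same wall, a minimal repro sketch (hypothetical file name, my example):

    // oom.ts — grows an array until V8's old space is exhausted, which
    // ends in exactly that FATAL ERROR (run via tsc + node, or ts-node).
    const hog: string[] = [];
    for (;;) {
      hog.push("x".repeat(1024 * 1024)); // 1 MB per iteration, never released
    }

Node’s --max-old-space-size=4096 flag raises the heap ceiling to roughly 4 GB, but for a loop like this it only delays the crash; streaming or chunking the data is the durable fix.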





