Hacker News

Also not mentioned: the memory hierarchy makes fools of people who simply expect Big-O notation to predict measured performance. A cache-optimal algorithm that is worse in Big-O terms can still outperform a poor implementation of an asymptotically superior algorithm.

TL;DR: Knowing Big-O does not detach you from physics.
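The point above can be sketched with a toy comparison, assuming a small, cache-resident input (the input size and names here are illustrative, not measured): an O(n) linear scan over a tiny contiguous list can beat an O(log n) binary search, because the scan is branch-predictable and cache-friendly.

```python
# Illustrative micro-benchmark: for small n, constant factors and cache
# behavior can matter more than asymptotic complexity.
import bisect
import timeit

data = list(range(16))  # small enough to fit in one or two cache lines

def linear_search(xs, target):
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    i = bisect.bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

# Both return the same index; only the constant factors differ.
assert linear_search(data, 11) == binary_search(data, 11) == 11

t_lin = timeit.timeit(lambda: linear_search(data, 11), number=100_000)
t_bin = timeit.timeit(lambda: binary_search(data, 11), number=100_000)
print(f"linear: {t_lin:.4f}s  binary: {t_bin:.4f}s")  # winner depends on hardware
```

There is deliberately no asserted winner: which one is faster depends on the machine, which is exactly the point.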




This is an important consideration in practice, and I think it would be helpful if classes teaching O-notation would include an exercise that makes this point. It is also true, however, that O-notation considerations always dominate for large enough problems.

Anyone expecting O-notation to predict measured performance has made a category error: it is never about run-time itself, but about how run-time changes with problem size (and only in the asymptotic limit).
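A worked illustration of that distinction, with made-up constant factors: an O(n) cost with a large constant loses to an O(n^2) cost with a tiny one until a crossover point, and asymptotics alone cannot tell you where that crossover lies.

```python
# Hypothetical cost models: 1000*n (linear, big constant) vs
# 0.01*n^2 (quadratic, tiny constant). Asymptotics say linear wins
# eventually, but "eventually" here means n > 100_000.
def cost_linear(n):
    return 1000 * n

def cost_quadratic(n):
    return 0.01 * n * n

# First n at which the quadratic model actually costs more.
crossover = next(n for n in range(1, 10**7) if cost_quadratic(n) > cost_linear(n))
print(crossover)  # 100001
```

Below the crossover, the "worse" algorithm is simply the faster one.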


Nor the realities of various languages. I've seen hash tables degrade catastrophically with "big data" (inb4 "then it wasn't a 'true' hash"). There are countless variables beyond "+ x".
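One way such degradation can happen, sketched with a deliberately pathological hash (the BadHash class is a contrived illustration, not a real-world case): when every key lands in the same bucket, average O(1) lookups decay into O(n) chain scans.

```python
# Contrived worst case: a key type whose hash always collides, forcing
# the dict to fall back on equality comparisons along one long chain.
class BadHash:
    def __init__(self, v):
        self.v = v
    def __hash__(self):
        return 42                      # every key collides
    def __eq__(self, other):
        return self.v == other.v

table = {BadHash(i): i for i in range(1000)}
# Lookups are still correct, but each one probes the shared bucket.
assert table[BadHash(999)] == 999
```

Adversarial or skewed key distributions can trigger the same effect in practice, which is why some runtimes randomize their hash seeds.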

If it is important, then you test/profile it. Otherwise it isn't important, and this is mostly theory crafting.
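A minimal profiling sketch along those lines, using Python's standard cProfile (the work function is a placeholder hot spot, not from the discussion):

```python
# Profile one call and print the hottest entries, instead of guessing
# from asymptotics where the time actually goes.
import cProfile
import io
import pstats

def work():
    # Deliberately simple hot loop to have something to measure.
    return sum(i * i for i in range(100_000))

prof = cProfile.Profile()
result = prof.runcall(work)                    # profile exactly this call
stats = pstats.Stats(prof, stream=io.StringIO())
stats.sort_stats("cumulative").print_stats(5)  # top 5 by cumulative time
```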


And a lot of real-world implementations will use the asymptotically-inferior algorithm when the (sub)problem is small, for exactly this reason.
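This hybrid pattern (as used by introsort and Timsort) can be sketched as follows; the cutoff of 16 is an illustrative guess, not a tuned value.

```python
# Hybrid sort sketch: quicksort for large ranges, but hand small
# subarrays to O(n^2) insertion sort, whose low overhead wins there.
import random

CUTOFF = 16  # illustrative; real libraries tune this empirically

def insertion_sort(xs, lo, hi):
    # Sort xs[lo..hi] in place, inclusive bounds.
    for i in range(lo + 1, hi + 1):
        key, j = xs[i], i - 1
        while j >= lo and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key

def hybrid_quicksort(xs, lo=0, hi=None):
    if hi is None:
        hi = len(xs) - 1
    if hi - lo < CUTOFF:
        insertion_sort(xs, lo, hi)     # small subproblem: cheap O(n^2) wins
        return
    pivot = xs[(lo + hi) // 2]         # Hoare-style partition
    i, j = lo, hi
    while i <= j:
        while xs[i] < pivot:
            i += 1
        while xs[j] > pivot:
            j -= 1
        if i <= j:
            xs[i], xs[j] = xs[j], xs[i]
            i += 1
            j -= 1
    hybrid_quicksort(xs, lo, j)
    hybrid_quicksort(xs, i, hi)

data = random.sample(range(1000), 200)
hybrid_quicksort(data)
assert data == sorted(data)
```

The asymptotic class of the whole sort is unchanged; only the constant factors on small subproblems improve.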





