As mentioned by others, this list is old. The cited algorithms are certainly still good to know, but the meaning of massive is different now. Today, massive means:

  - too large to fit even on big iron (which few people can afford anyway)
  - low value: much of the data is useless or of too poor quality to be useful, so not taking all of it into account all the time is not a big loss.
Nothing beyond near-linear, or even sublinear, algorithms really works in those cases. Singular Value Decomposition is a great example. Until recently, the focus was mostly on doing fast, accurate SVD of large matrices. There has since been a surge of approximate algorithms that look at any given piece of the data at most once. That is useless for most "hard" engineering tasks, but for the analysis of large graph data you can most likely tolerate a few % of error in your biggest singular values and still get something useful.
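
A minimal sketch, in Python/NumPy, of the randomized-sketch idea behind this (in the spirit of Halko, Martinsson & Tropp). The truly single-pass variants are more involved; the point here is just that projecting onto a small random subspace already recovers the leading singular values well:

  import numpy as np

  def randomized_svd(A, k, oversample=10, seed=0):
      # Approximate top-k SVD of A via a random sketch of its column space.
      rng = np.random.default_rng(seed)
      Omega = rng.standard_normal((A.shape[1], k + oversample))
      Y = A @ Omega                    # pass over A: sample its range
      Q, _ = np.linalg.qr(Y)           # orthonormal basis for that range
      B = Q.T @ A                      # second pass: project A onto the basis
      U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
      return (Q @ U_small)[:, :k], s[:k], Vt[:k, :]

  # Low-rank-plus-noise test: the leading singular values closely match
  # the exact ones, which is the "few % of error" regime described above.
  rng = np.random.default_rng(1)
  A = rng.standard_normal((2000, 20)) @ rng.standard_normal((20, 1000)) \
      + 0.01 * rng.standard_normal((2000, 1000))
  U, s, Vt = randomized_svd(A, k=10)
  print(s[:5])
  print(np.linalg.svd(A, compute_uv=False)[:5])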

The fun part is that something as simple as matrix multiplication becomes an interesting and potentially hard problem.



