
That'd only really be true if a significant portion of files were hardlinked. My root filesystem (Debian GNU/Linux testing) has 393,410 files, of which 825 have more than one link. My /home has 0 out of 156,273 files hardlinked.

Having 0.2% (or less) of files hardlinked shouldn't prevent storing files in the same directory near each other.
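A count like the one above can be reproduced with a short script. This is a minimal sketch, not the tool the commenter used: it simply walks a tree, counts regular directory entries, and checks each one's link count via `st_nlink` (the function name and the skip-on-error behaviour are my own choices):

```python
import os

def count_hardlinked(root):
    """Count directory entries under `root`, and how many of them
    have a link count greater than one (i.e. are hardlinked)."""
    total = 0
    multi = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.lstat(os.path.join(dirpath, name))
            except OSError:
                continue  # entry vanished or is unreadable; skip it
            total += 1
            if st.st_nlink > 1:
                multi += 1
    return total, multi
```

Note that this counts directory entries, not inodes, so a file with two links in the scanned tree is counted twice; deduplicating on `(st_dev, st_ino)` would count inodes instead.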




The point is that just the possibility of having more than one link kills your ability to assume that there is only one. It has nothing to do with whether or not they actually have more than one link.


Right, but this bumps into a spatial-locality analogue of Amdahl's law. If you optimise a case that shows up 1% of the time, then you can only get a total gain of 1%.


No, the point is that you can't optimise the other 99% because that optimisation only works if hardlinks aren't allowed.


Why not? Choose a parent at random and place the file near the children of that parent. In 99% of cases it's the only parent, so you get the locality you aimed for. Why would the possibility of other names prevent that?
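The heuristic described above can be sketched in a toy model. Everything here is illustrative, not a real allocator: assume each directory has already been assigned a block group, and a new file is placed in the group of one randomly chosen parent. With a single link (the common case) the choice is deterministic and the file lands next to its siblings:

```python
import random

def place_file(parent_dirs, dir_to_group, rng=random):
    """Return the block group for a new file, given the set of all
    directories that link to it and a (hypothetical) mapping from
    directory to block group. One parent is chosen at random; when
    there is only one parent, locality with its siblings is exact."""
    chosen = rng.choice(sorted(parent_dirs))  # sorted for reproducibility
    return dir_to_group[chosen]
```

When a file does have several links, it still lands near the children of one of its directories, which is no worse than placing it arbitrarily.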



