
The starting example was Bloom filters.



I'm not sure what the point of your post is. While Bloom filters are heavily used in actual production code throughout the industry, it is very rare for anyone to need to code their own or modify a prior legacy implementation.

Not all educational programs will cover Bloom filters, and for those that do, there's no guarantee that students will retain the information and be able to recall it.

I don't know of any common interview question patterns where a Bloom filter would be the only optimal solution. If it shares optimality with other data structures, you'd be better off choosing something else unless you're extremely confident in your ability to implement one and explain it clearly and in depth, since the odds of your interviewer not being familiar with them are high compared to other solutions.

I only know what a Bloom filter is because of HN and previous submissions like this one that have made it to the front page over the years. It's nowhere near as common knowledge as tries, especially if you are sampling younger cohorts.


I just didn't think it was necessary to be judgemental or gatekeep about what's obscure or interesting 'enough' - better to let anyone share what interested them and why.

To your points: personally I imagine I do fall in a 'younger cohort'; I've come across Bloom filters on HN far more frequently, and I dimly recall tries from university, but I knew them as prefix trees. Without submissions like this one I wouldn't have learned what a trie is, or made that alias in my head.


I was with you until the last seven words ;)

Trees were a huge part of CS practice and education historically, but have been replaced by hash-based methods in many cases. For example, in C++ std::map is generally a tree, while in a more recent language the standard Map data structure will be a hashmap. My impression is that the instruction time devoted to Bloom filters (and other hash-based methods) vs trees has shifted towards the former over the last ~20y.


The C++ STL uses trees because the only generic requirement for them to work is a < operator. Designing generics around hashing is much more difficult, and most languages struggle with that.
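
To illustrate that difference, a minimal sketch (the `Version` type and its hasher are made up for the example): std::map compiles with nothing but operator< on the key, while std::unordered_map also needs a hash function you supply yourself.

```cpp
#include <map>
#include <string>
#include <unordered_map>

// Illustrative key type: std::map only needs a strict weak ordering via operator<.
struct Version {
    int major;
    int minor;
    bool operator<(const Version& o) const {
        return major != o.major ? major < o.major : minor < o.minor;
    }
    bool operator==(const Version& o) const {
        return major == o.major && minor == o.minor;
    }
};

// For std::unordered_map the key additionally needs a hash, written by hand here.
struct VersionHash {
    std::size_t operator()(const Version& v) const {
        return std::hash<int>{}(v.major) * 31u + std::hash<int>{}(v.minor);
    }
};

int main() {
    std::map<Version, std::string> ordered;  // works with operator< alone
    ordered[{1, 2}] = "tree-based";

    std::unordered_map<Version, std::string, VersionHash> hashed;  // needs hash + ==
    hashed[{1, 2}] = "hash-based";
}
```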

Ironically, due to caches, sorting and then using algorithms that rely on order tends to be superior to most hashing implementations, even though it is theoretically worse due to the log(n) factor. So in a way, the C++ algorithms are actually more modern.


> Ironically, due to caches, sorting and then using algorithms that rely on order tend to be superior than most hashing implementations

This doesn't match my experience at all. C++ trees are not cache-friendly; they're pointer-chasing (and there's no arena implementation in the STL). Second, any sort of ordering structure (be it through a tree or through sorting + binary search) is notoriously prone to branch mispredictions. They look great in a microbenchmark if you search for the same element all the time and the trees are small, but in a practical application, it can be incredibly painful. Pretty much by design, every choice is 50–50 and unpredictable.

std::unordered_map from most STLs isn't a fantastic hash table, but in my experience it absolutely destroys std::map (and is also comfortably ahead of something like std::lower_bound on a sorted array). All of this is assuming large-ish N, of course; for small N (e.g. below 30, usually), the best by far is a simple unsorted array and just going through it linearly.
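
As a rough sketch of that small-N pattern (the `SmallMap` alias and `find_value` helper are invented for illustration, and the ~30-element cutoff is the estimate above, not a measured constant): a plain unsorted vector of pairs plus a linear scan.

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// A tiny "map" for small N: an unsorted vector of pairs plus a linear scan.
// Everything sits in one contiguous, cache-friendly allocation, so for a handful
// of entries this tends to beat both std::map and std::unordered_map.
using SmallMap = std::vector<std::pair<int, std::string>>;

const std::string* find_value(const SmallMap& m, int key) {
    auto it = std::find_if(m.begin(), m.end(),
                           [key](const auto& kv) { return kv.first == key; });
    return it != m.end() ? &it->second : nullptr;
}

int main() {
    SmallMap m = {{3, "three"}, {1, "one"}, {7, "seven"}};
    const std::string* v = find_value(m, 7);  // linear scan: no hashing, no pointer chasing
    (void)v;
}
```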


> C++ trees are not cache-friendly

Agreed, they should use a B-tree to get cache locality and easy generics, but there is legacy code there.

I was referring to the performance of algorithms, for example `std::unique`, `std::lower_bound`, etc. Many of these operate on sorted ranges, whereas most other languages' standard libraries use hashing for the same tasks.
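
A small sketch contrasting the two styles (the sample data is arbitrary): the sorted-range algorithms on one side, and the hash-based membership test that most newer standard libraries default to on the other.

```cpp
#include <algorithm>
#include <unordered_set>
#include <vector>

int main() {
    std::vector<int> v = {5, 3, 5, 9, 3, 1};

    // Sorted-range style: sort once, then order-based algorithms do the rest.
    std::sort(v.begin(), v.end());
    v.erase(std::unique(v.begin(), v.end()), v.end());      // dedup via adjacency
    bool has9 = std::binary_search(v.begin(), v.end(), 9);  // O(log n) membership
    auto pos = std::lower_bound(v.begin(), v.end(), 4);     // first element >= 4

    // Hash-based style, the usual default elsewhere.
    std::unordered_set<int> s(v.begin(), v.end());
    bool has9_hashed = s.count(9) != 0;                      // expected O(1) membership

    (void)has9; (void)pos; (void)has9_hashed;
}
```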

> is also comfortably ahead of something like std::lower_bound on a sorted array

I would be interested to learn more about when that's the case. But, it's also not very flexible. You can put an `int` in it, great. Can you put `std::pair<int, int>` in it? Does it work as well?


B-trees don't work well for CPU caches; 64-byte lines are typically too small to gain much. (OK, the M1 has 128-byte lines, but still.) And you still get the branch mispredicts, plus a significant increase in code complexity, so hashing is still significantly better. (I've seen C++ projects where the main structure was a B+-tree because the lower levels could reside on disk, but where they maintained a separate hash table in memory to skip the top few B-tree levels!)

> I would be interested to learn more about when that's the case. But, it's also not very flexible. You can put an `int` in it, great. Can you put `std::pair<int, int>` in it? Does it work as well?

If by “it” you mean std::unordered_map (with the pair as key): yes, you can, but you'd have to supply your own hash function, because std::pair<A, B> does not have a std::hash specialization. (It's a bit sad, but the problem is highly specific to std::pair/std::tuple.) Likewise, you can use any arbitrary type you create yourself, as long as it has operator== and you've written a hash function for it.
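
For example, a minimal sketch of supplying that hash (the `PairHash` name and the combine constant are just one common choice, not anything the standard prescribes):

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <utility>

// std::pair has no std::hash specialization, so provide a hasher explicitly.
struct PairHash {
    std::size_t operator()(const std::pair<int, int>& p) const {
        std::size_t h1 = std::hash<int>{}(p.first);
        std::size_t h2 = std::hash<int>{}(p.second);
        // Simple combine step (boost::hash_combine-style constant); fine for illustration.
        return h1 ^ (h2 + 0x9e3779b9u + (h1 << 6) + (h1 >> 2));
    }
};

int main() {
    // The pair works as a key once the hasher is supplied;
    // operator== for std::pair already exists, so equality comes for free.
    std::unordered_map<std::pair<int, int>, std::string, PairHash> grid;
    grid[{3, 4}] = "cell";
}
```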


On a modern Intel processor, it was much faster to scan linearly through an array of 10k integers than to sort it first and then do a binary search.

The 'small N' is not that small anymore, and the speed difference between stack and heap is bigger now than ever.
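
A rough sketch of how one might time that comparison (the array size and single-lookup workload are assumptions here; real results vary a lot with hardware and with how many lookups amortize the sort):

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::vector<int> data(10'000);
    for (int& x : data) x = static_cast<int>(rng());
    int needle = data[5'000];

    // Linear scan over the unsorted array.
    auto t0 = std::chrono::steady_clock::now();
    bool found_scan = std::find(data.begin(), data.end(), needle) != data.end();
    auto t1 = std::chrono::steady_clock::now();

    // Sort first, then binary search (the sort cost is paid up front).
    std::vector<int> sorted = data;
    auto t2 = std::chrono::steady_clock::now();
    std::sort(sorted.begin(), sorted.end());
    bool found_bin = std::binary_search(sorted.begin(), sorted.end(), needle);
    auto t3 = std::chrono::steady_clock::now();

    auto ns = [](auto d) {
        return (long long)std::chrono::duration_cast<std::chrono::nanoseconds>(d).count();
    };
    std::printf("scan: %lld ns (found=%d), sort+binary_search: %lld ns (found=%d)\n",
                ns(t1 - t0), found_scan, ns(t3 - t2), found_bin);
}
```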


To your point, C++ got hash structures in the standard library (unordered_map and unordered_set) in C++11 (so only about a decade ago).


A lot of this is that, other than heaps, most tree-based algorithms involve O(log n) random-access pointer lookups, which makes them relatively slow for in-memory data structures.


Trie != tree





