
I wouldn’t.

I am not convinced hashlife is still the best of them. The invention is from the early ’80s, and many old heuristics about caching versus calculation are no longer true today. Here’s my article on a related topic, lookup tables: https://github.com/Const-me/LookupTables

If I wanted to implement an efficient life simulator, I’d probably use the fact that with AVX2 it’s possible to count those neighbors for an 8x8 block of pixels in a single __m256i register (with 8 neighbors max, 4 bits / value is more than enough). That might be faster than an L3 cache access, and will definitely be much faster than a main memory access.

That’s why I’d put those pixels in 8x8 blocks, single uint64_t value per block.
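For illustration, here’s a rough sketch of what that 8x8 neighbor count could look like (my own sketch, not tested; cells on the block’s edge would also need bits from the adjacent blocks, which this omits; _pdep_u64 is BMI2, available on the same CPUs as AVX2):

    #include <immintrin.h>
    #include <stdint.h>

    // Expand an 8x8 block (1 bit/cell, bit r*8+c = cell at row r, column c) into a
    // __m256i holding 4 bits/cell: row r goes to the r-th 32-bit lane, column c to
    // the c-th nibble of that lane.
    static inline __m256i expandBlock( uint64_t block )
    {
        alignas( 32 ) uint32_t rows[ 8 ];
        for( int r = 0; r < 8; r++ )
            rows[ r ] = (uint32_t)_pdep_u64( ( block >> ( r * 8 ) ) & 0xFF, 0x11111111u );
        return _mm256_load_si256( ( const __m256i* )rows );
    }

    // For every cell, add the values of the left, center and right columns.
    static inline __m256i sumColumns( __m256i v )
    {
        return _mm256_add_epi32( v,
            _mm256_add_epi32( _mm256_slli_epi32( v, 4 ), _mm256_srli_epi32( v, 4 ) ) );
    }

    // Neighbor counts for all 64 cells of the block, 4 bits per cell.
    static inline __m256i countNeighbors( uint64_t block )
    {
        const __m256i center = expandBlock( block );
        const __m256i above = expandBlock( block << 8 );  // lane r holds row r-1
        const __m256i below = expandBlock( block >> 8 );  // lane r holds row r+1
        // Nibbles never exceed 9, so the 32-bit adds can't carry between cells.
        __m256i sum = _mm256_add_epi32( sumColumns( center ),
            _mm256_add_epi32( sumColumns( above ), sumColumns( below ) ) );
        // The 3x3 sum includes the cell itself; subtract it to get the 8 neighbors.
        return _mm256_sub_epi32( sum, center );
    }

From there, applying the birth/survival rule is a few nibble-wise compares and masks against the original block.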

Putting those blocks in a hash table takes care of the infinite field. std::unordered_map will work, but if you care about performance, prefer CAtlMap from Microsoft, dense_hash_map from Google, or something similar.
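For example, something along these lines; the key packing and the hash are just one possible choice:

    #include <stdint.h>
    #include <unordered_map>

    // Sparse infinite field: each stored 8x8 block is keyed by its block coordinates
    // (cell coordinates divided by 8); blocks that are all dead are simply absent.
    struct BlockKey
    {
        int32_t x, y;
        bool operator==( const BlockKey& o ) const { return x == o.x && y == o.y; }
    };

    struct BlockKeyHash
    {
        std::size_t operator()( const BlockKey& k ) const
        {
            // Mix the two coordinates into one value; any decent mixer will do.
            uint64_t v = ( (uint64_t)(uint32_t)k.x << 32 ) | (uint32_t)k.y;
            v *= 0x9E3779B97F4A7C15ull;
            return (std::size_t)( v ^ ( v >> 32 ) );
        }
    };

    // Value = the 64 cells of the block, 1 bit per cell.
    using Field = std::unordered_map<BlockKey, uint64_t, BlockKeyHash>;

Swapping in dense_hash_map or CAtlMap later only changes the container type, not the layout.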




I believe most modern implementations, including hybrid implementations of HashLife, take advantage of register and cache sizes. HashLife as I might implement it in JavaScript is really a very naïve approach. I think Bill Gosper even had versions in the 1970s that did things with registers and assembly, not just doing everything in lists.

But the big issue with HashLife is not whether you can write a faster implementation; it's whether the particular pattern(s) you are exploring are amenable to HashLife. It wins for highly repetitive patterns, but is much slower for patterns with a very large amount of entropy in time and space, like methuselahs.

HashLife can be fabulous for exploring purpose-built machines that cycle through regular states.


Looking at golly-2.8-src: at first sight, it does not use SIMD. It only uses 32-bit registers in 32-bit builds and 64-bit registers in 64-bit builds.

All x86 CPUs manufactured after 2003 support SSE2, which gives 16 128-bit SIMD registers in 64-bit mode (8 in 32-bit mode). Using them for calculations is the only way to get anywhere near the advertised CPU performance.

For languages other than C and C++, like JavaScript, where there are no SIMD instructions, calculations are generally slower. But since the hash tables are provided by the platform, and hence implemented in highly optimized C code, HashLife should be perfectly fine.


For HashLife, I think the chief problems are around an efficient cache, and especially efficient canonicalization. The algorithm is so fast for universes that suit its ___domain that I think most other considerations drop by the wayside.

However, besides the fact that not all problems have enough redundancy in time+space to suit it, it also suffers from the fact that it is most efficient when computing massive jumps in generations. So it’s not as performant for creating animations as it is for doing things like finding the maximum size of a pattern, calculating its growth rate, and so forth.
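On the canonicalization point above: the usual approach is hash-consing of the quadtree nodes, roughly like this (leaf nodes, the memoized recursive step and garbage collection are all omitted from the sketch):

    #include <stdint.h>
    #include <memory>
    #include <unordered_map>

    // Hash-consed quadtree node. Because nodes are canonicalized, equal subtrees
    // are the same pointer, so equality checks are pointer compares and memoized
    // results can be keyed by that pointer.
    struct Node
    {
        const Node *nw, *ne, *sw, *se;
        int level;   // the node covers 2^level x 2^level cells
    };

    struct ChildKey
    {
        const Node *nw, *ne, *sw, *se;
        bool operator==( const ChildKey& o ) const
        {
            return nw == o.nw && ne == o.ne && sw == o.sw && se == o.se;
        }
    };

    struct ChildKeyHash
    {
        std::size_t operator()( const ChildKey& k ) const
        {
            // FNV-1a over the four child pointers; the speed and quality of this
            // hash directly affect how well canonicalization performs.
            uint64_t h = 14695981039346656037ull;
            h = ( h ^ (uint64_t)(uintptr_t)k.nw ) * 1099511628211ull;
            h = ( h ^ (uint64_t)(uintptr_t)k.ne ) * 1099511628211ull;
            h = ( h ^ (uint64_t)(uintptr_t)k.sw ) * 1099511628211ull;
            h = ( h ^ (uint64_t)(uintptr_t)k.se ) * 1099511628211ull;
            return (std::size_t)h;
        }
    };

    class NodeCache
    {
        std::unordered_map<ChildKey, std::unique_ptr<Node>, ChildKeyHash> nodes;

    public:
        // Return the canonical node with these four children, creating it if needed.
        const Node* intern( const Node* nw, const Node* ne, const Node* sw, const Node* se )
        {
            const ChildKey key{ nw, ne, sw, se };
            auto it = nodes.find( key );
            if( it != nodes.end() )
                return it->second.get();
            auto node = std::make_unique<Node>( Node{ nw, ne, sw, se, nw->level + 1 } );
            return nodes.emplace( key, std::move( node ) ).first->second.get();
        }
    };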


For JavaScript you might do quite well with a 32-way bit-slice implementation, like the one at http://fanf.livejournal.com/93032.html. That is, if you can't do it on the GPU with WebGL :-)
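The gist of the bit-slice trick, sketched here in C++ for consistency with the rest of the thread (the same bitwise logic ports to JavaScript directly; word edges see zeros, so wraparound or block stitching is left out):

    #include <stdint.h>

    // Full adder over bit planes: per bit position, a+b+c = sum + 2*carry.
    static inline void fullAdd( uint32_t a, uint32_t b, uint32_t c,
                                uint32_t& sum, uint32_t& carry )
    {
        const uint32_t t = a ^ b;
        sum = t ^ c;
        carry = ( a & b ) | ( t & c );
    }

    // Half adder over bit planes: a+b = sum + 2*carry.
    static inline void halfAdd( uint32_t a, uint32_t b, uint32_t& sum, uint32_t& carry )
    {
        sum = a ^ b;
        carry = a & b;
    }

    // 2-bit per-column sum of a cell and its left/right neighbors (bit i = column i).
    static inline void horizontalSum( uint32_t row, uint32_t& lo, uint32_t& hi )
    {
        fullAdd( row << 1, row, row >> 1, lo, hi );
    }

    // Next generation of one 32-cell row, given the rows above and below it.
    static inline uint32_t nextRow( uint32_t above, uint32_t cur, uint32_t below )
    {
        uint32_t loA, hiA, loC, hiC, loB, hiB;
        horizontalSum( above, loA, hiA );
        horizontalSum( cur, loC, hiC );
        horizontalSum( below, loB, hiB );

        // Add the three 2-bit sums into a 4-bit total (0..9); the total counts the
        // 3x3 neighborhood including the cell itself.
        uint32_t b0, c0, t1, t2, b1, t3, b2, b3;
        fullAdd( loA, loC, loB, b0, c0 );
        fullAdd( hiA, hiC, hiB, t1, t2 );
        halfAdd( t1, c0, b1, t3 );
        halfAdd( t2, t3, b2, b3 );

        // Alive next generation iff the 3x3 total is 3, or it is 4 and the cell is alive.
        const uint32_t total3 = b0 & b1 & ~b2 & ~b3;
        const uint32_t total4 = ~b0 & ~b1 & b2 & ~b3;
        return total3 | ( cur & total4 );
    }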



