Indeed. I suppose the two lessons are, stick with C, and don't forget the semantics of your original problem when optimizing.
#include <stdint.h>
#include <string.h>
int run_switches(const char *s) {
int res = 0;
uint8_t tmp = 0;
size_t n = strlen(s);
for (size_t i = n & 127; i--; ++s)
tmp += (*s == 's');
res += tmp;
for (size_t j = n >> 7; j--;) {
tmp = 0;
for (size_t i = 128; i--; ++s)
tmp += (*s == 's');
res += tmp;
}
return 2 * res - n;
}
Which tricks in there are worth playing around with more widely?
Is the uint8_t just "no point in using something bigger" or does it likely help the compiler? Does/can the signedness matter as well as the size?
Ditto looping downwards -- is it often likely to improve things? Can it generalize to pointer/iterator ranges, or is it often worth trying to phrase them in terms of array/index accesses instead?
I guess the compiler's unrolling heuristics generally aren't as good as that blocking "mod then div" alternative to Duff's device? Obviously taking `s` out of the loop condition is part of the magic.
Not checking for the 'p' character with a separate comparison is an easy optimization to understand.
Any places to read about this sort of thing, or any tricks or guidelines that come to mind? I write a fair bit of performance-sensitive code but it's all probably 20x slower than it could be because I have no intuition for what transformations compilers will do beyond "this prob gets inlined" etc.
> I guess the compiler's unrolling heuristics generally aren't as good as that blocking "mod then div" alternative to Duff's device? Obviously taking `s` out of the loop condition is part of the magic.
The magic in this case is the compiler autovectorizer. Making the length of the loop a loop invariant allows the autovectorizer to kick in.
The reason "blocking" by accumulating in a uint8_t helps further is that it allows the compiler to accumulate in 8-bit SIMD lanes instead of 32-bit SIMD lanes.
The same operation on 8-bit lanes will, to a first approximation, do 4x the work per cycle.
> Is the uint8_t just "no point in using something bigger" or does it likely help the compiler? Does/can the signedness matter as well as the size?
In an ideal world you could just use uint_fast8_t and the compiler would answer this question for you. In the real world I don't think compilers are smart enough, or there are too many other constraints limiting them :(
Replying to my own post: the off-by-one error I reported was my mistake. I was calling the function wrong, passing it the size of the buffer rather than the length of the string.
Also, someone else figured out that we can just use an and instruction instead of cmp. That gives us this version:
#include <stddef.h>
#include <stdint.h>
int run_switches(const char *s, const size_t n) {
int res = 0;
uint8_t tmp = 0;
for (int i = n & 127; i--; ++s)
tmp += 1 & *s;
res += tmp;
for (int i = n >> 7; i--;) {
tmp = 0;
for (int j = 128; j--; ++s)
tmp += 1 & *s;
res += tmp;
}
return 2 * res - n;
}
This is 111GB/s, up from 4.5GB/s in the blog. I'm going to try really hard to put this problem down now and work on something more productive.
ANDs vs cmps seem to be a mixed bag. They are faster on my older Broadwell system (E5-2690V4 / 128GiB RAM) but they are actually consistently slower on my Rome system (AMD EPYC 7B12 / 512GiB RAM). Of course, neither Broadwells nor Romes have AVX512, so likely this is where you're getting the win from.
The code in question has to process a string of variable length.
But the compiler/CPU can process bytes one at a time or much faster in groups. The code is trying to process as much as possible in groups of 128.
But since the caller can pass in a string which is not a multiple of 128 chars, the first for-loop (& 127) figures out how much of the string to process up front so that the remaining string length is a multiple of 128.
The second for-loop divides by 128 (>> 7) to find out how many full 128-char blocks there are to process. The inner for-loop processes 128 chars at a time, counting 's' chars.
Now the for-loop within a for-loop doesn't look any faster than a plain single for-loop, but the fixed-length inner loop lets the heuristics of certain compilers see that they can generate code operating on multiple chars at the same time (SIMD instructions), since each operation's result is independent of the others.
On a compiler that cannot generate SIMD code, the code won't be much faster, if at all, than the naive straightforward manner.
You're correct, it does not account for alignment.
The reason it helps performance is that it allows the compiler to accumulate in byte-sized SIMD variables instead of int-sized SIMD variables. My system has AVX-512, so 64-byte-wide SIMD registers. With the non-blocked version, the compiler will load 16 chars into ints in a 64-byte ZMM register, check if each is an 's', and increment if so. With the blocked version, with the uint8_t tmp variable, the compiler will load 64 chars into uint8_ts in a 64-byte ZMM register instead. But there's a problem: we're gonna overflow the variables. So every 128 iterations the compiler will stop, widen the 64-byte uint8_t accumulator into four 64-byte registers of int accumulators, and sum them all up. Then it does the next 128 iterations.
I'm pretty sure a similar thing will happen with SSE or AVX2 but I didn't check.
I think it's just reading unaligned. That's at worst a ~2x loss of throughput from L1, but the moment the problem is large enough that the working set doesn't reliably fit in L1, it stops mattering at all.
In general for x86, unaligned writes are worth doing work to avoid, but reads are in most situations not really an issue.
Bummer! Edited the answer. Not sure about the off-by-one though.
Say the string is str[] = "spp\0". n = strlen(str) is 3.
In the end, res would be 1 and 2 * res - n == -1.
Oh, found it. It's because I wasn't using strlen and had been passing the length of the buffer instead of the length of the string. Only my code had the off-by-one.
This makes the assumption that the only characters in the string are "s" and "p". There is no basis for this assumption. I think this code solves a different problem rather than being an optimisation of the original code.
The original problem was working on strings that only hold 's' and 'p' characters, as seen in bench.c. The first implementation checked against 's' and 'p' specifically, and all subsequent versions optimized that first version.