
Superscalar processors (which include all mainstream ones these days) do this within a single core, provided there are no data dependencies between the assignment statements. They have multiple arithmetic logic units, and they can start a second operation while the first is executing.
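A minimal sketch of what that looks like at the source level (names are just for illustration):

    int sum_of_products(int x, int y, int z, int w) {
        // No data dependency between a and b, so a superscalar core can
        // issue both multiplications in the same cycle on separate ALUs.
        int a = x * y;
        int b = z * w;
        // The final add depends on both results, so it has to wait for them.
        return a + b;
    }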

But yeah, I agree that we were promised a lot more automatic multithreading than we got. History has proven that we should be wary of any promises that depend on a Sufficiently Smart Compiler.

Eh, in this case not splitting them up to compute them in parallel is the smartest thing to do. Locking overhead alone is going to dwarf every other cost involved in that computation.
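To make the overhead concrete, a toy C++ sketch (not a rigorous benchmark; exact numbers vary wildly by platform):

    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        using namespace std::chrono;
        volatile int result = 0;

        auto t0 = steady_clock::now();
        result = 2 + 2;                          // the actual "work": sub-nanosecond
        auto t1 = steady_clock::now();

        std::thread t([&] { result = 2 + 2; }); // same work, plus spawn + join
        t.join();
        auto t2 = steady_clock::now();

        std::cout << "inline: " << duration_cast<nanoseconds>(t1 - t0).count() << " ns\n"
                  << "thread: " << duration_cast<nanoseconds>(t2 - t1).count() << " ns\n";
    }

The spawn/join alone typically costs tens of microseconds, several orders of magnitude more than the add it wraps.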

Yeah, I think the dream was more like, “The compiler looks at a map or filter operation and figures out whether it’s worth the overhead to parallelize it automatically.” And that turns out to be pretty hard, with potentially painful (and nondeterministic!) consequences for failure.
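For contrast, the closest mainstream C++ gets today makes that choice explicit rather than automatic: the C++17 parallel algorithms, where the programmer picks the execution policy by hand. A small sketch:

    #include <algorithm>
    #include <execution>
    #include <vector>

    void double_all(std::vector<double>& v) {
        // The programmer, not the compiler, decides this is worth
        // parallelising by passing std::execution::par explicitly.
        std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
                       [](double x) { return x * 2.0; });
    }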

Maybe it would have been easier if CPU performance didn’t end up outstripping memory performance so much, or if cache coherency between cores weren’t so difficult.


I think it has shaken out the way it has because compile-time optimizations to this extent require knowing runtime constraints/data at compile time. For non-trivial situations that's impossible: the code will be run with too many different types of input data, on too many different cache sizes, etc.

The CPU has better visibility into the actual runtime situation, so can do runtime optimization better.

In some ways, it’s like a bytecode/JVM type situation.


If we can write code that dispatches between different code paths (as has been done for decades to support SSE, and later AVX, within one binary), then we can write code that parallelizes large array operations based on heuristics. Not much different from busy spins falling back to sleep/other mechanisms when the fast path fails after ca. 100-1000 attempts to secure a lock.
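Something like this hand-rolled, size-based dispatch (the threshold and the naive two-way split are made up for illustration):

    #include <cstddef>
    #include <thread>

    constexpr std::size_t PAR_THRESHOLD = 100000; // would be tuned by heuristics

    void do_expensive_thing(double& x) { x *= x; } // stand-in workload

    void process_serial(double* data, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            do_expensive_thing(data[i]);
    }

    void process(double* data, std::size_t n) {
        // Same idea as runtime SSE/AVX dispatch: pick a code path once,
        // based on information only available at runtime (here, the size).
        if (n < PAR_THRESHOLD) { process_serial(data, n); return; }
        std::thread t(process_serial, data, n / 2); // first half on a new thread
        process_serial(data + n / 2, n - n / 2);    // second half on this thread
        t.join();
    }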

For the trivial example of 2+2 like above, of course, this is a moot discussion. The commenter should've led with a better example.


Sure, but it’s a rare situation (by code path) where it will beat the CPU’s auto optimization, eh?

And when that happens, almost always the developer knows it is that type of situation and will want to tune things themselves anyway.


What kind of CPU auto-optimization? Here specifically I envisioned a macro-level optimization, triggered when an array is detected to have a length on the order of thousands/tens of thousands. I guess some advanced sorting algorithms do extend their operation across multiple threads in such cases.

For CPU machine code, it's the compilers doing the hard work: reordering code to enable ILP (instruction-level parallelism), eliminating false dependencies, inlining, vectorizing; whatever else it takes to keep the pipeline filled and busy.
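The classic case is a reduction: written naively, every add depends on the previous one; splitting the accumulator creates independent dependency chains the hardware can overlap. (A sketch; note the compiler can't make this transformation on floating-point code by itself without -ffast-math, since FP addition isn't associative.)

    double sum_naive(const double* a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; ++i)
            s += a[i];           // each add depends on the previous one
        return s;
    }

    double sum_ilp(const double* a, int n) {
        double s0 = 0.0, s1 = 0.0;
        for (int i = 0; i + 1 < n; i += 2) {
            s0 += a[i];          // two independent chains: the CPU can
            s1 += a[i + 1];      // keep both adds in flight at once
        }
        if (n % 2) s0 += a[n - 1]; // odd-length tail
        return s0 + s1;
    }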

I'd love for the sentiment "the dev knows" to be true, but I think this is no longer the case. Maybe if you are in a low-level language AND have time to reason about it? Add to this the reserved smile when I see someone "benchmarking" their piece of code in a "for i to 100000" loop, with no other considerations. Next, suppose a high-level language project: the most straightforward optimization for new code is to apply proper algorithms and fitting data structures. And I think even this is too much to ask nowadays, because it takes time, effort, and knowing such options exist in the first place.


Spawning threads or using a thread pool implicitly would be pretty bad - it would be difficult to reason about performance if the compiler was to make these choices for you.

I think you’re fixating on the very specific example. Imagine if instead of 2 + 2 it was multiplying arrays of large matrices. The compiler or runtime would be smart enough to figure out whether it’s worth dispatching the parallelism for you. Basically auto-vectorisation, but for parallelism.

Notably - in most cases, there is no way the compiler can know at compile time which of these scenarios is going to happen.

At runtime, the CPU can figure it out though, eh?


I mean, theoretically it's possible. A super basic example: if the data is known at compile time, it could be auto-parallelized, e.g.

    int buf_size = 10000000;
    auto vec = make_large_array(buf_size);
    for (const auto& val : vec)
    {
        do_expensive_thing(val);
    }
this could clearly be parallelised. The C++ that would do it automatically doesn't exist, but we can see the transformation would be valid.

If I replace it with

    int buf_size = 10000000;
    cin >> buf_size;
    auto vec = make_large_array(buf_size);
    for (const auto& val : vec)
    {
        do_expensive_thing(val);
    }

the compiler could generate some code that looks like:

    if buf_size >= SOME_LARGE_THRESHOLD { DO_IN_PARALLEL }
    else { DO_SERIAL }

With some background logic for managing threads, etc. In a C++-style world where "control" is important it likely wouldn't fly, but if this was python...

    arr_size = 10000000
    buf = [None] * arr_size
    for x in buf:
        do_expensive_thing(x)
could be parallelised at compile time.
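(Worth noting the threshold-dispatch version isn't purely hypothetical in C++ land either: OpenMP's if clause makes a parallel region conditional on a runtime expression, so the heuristic is one annotation. The threshold value is made up, as before:)

    void process(double* buf, int buf_size) {
        // Runs the loop across threads only when buf_size clears the
        // threshold; otherwise OpenMP executes it serially on this thread.
        #pragma omp parallel for if(buf_size >= 100000)
        for (int i = 0; i < buf_size; ++i)
            buf[i] *= buf[i];    // stand-in for do_expensive_thing
    }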

Which no one really does (data is generally provided at runtime). Which is why ‘super smart’ compilers kinda went nowhere, eh?

I dunno. I was promised the same things when I started programming and it never materialised.

It doesn’t matter what people do or don’t do because this is a hypothetical feature of a hypothetical language that doesn’t exist.


huh?

You’re fixated on the very specific examples in our existing tools and saying that this wouldn’t work. Numpy could have a switch inside an operation that decides whether to auto-parallelise or not, for example. It’s possible, but nobody is doing it. Maybe for good reasons, maybe for bad.

I’m doing no such thing. I’m providing an example of why verifiable industry trends and current technical state of the art are the way they are.

You providing examples of why it totally-doesn’t-need-to-be-that-way are rather tangential, aren’t they? Especially when they aren’t addressing the underlying point.
