
That was a very long-winded non-answer, but I think I understood it to be essentially "Yes".

I'm definitely not an expert, but to me this memory model sounds like a more circumspect attempt to carve out a set of benign data races which we believe are OK. Now, perhaps it will work this time, but on each previous occasion it has failed, exactly as illustrated by Java.

Indeed the Sivaramakrishnan slides I'm looking at about this are almost eerily reminiscent of the optimism for Java's memory model when I was younger (and more optimistic myself). We'll provide programmers with this new model, which is simpler to reason about, and so the problem will evaporate.

Some experts, some of the time, were able to correctly reason about both the old and new models; too many programmers too often got even the new model wrong.

So that leads me to think Rust made the right choice here. Let's have a compiler diagnostic (from the borrow checker) when a programmer tries to introduce a data race, rather than contort ourselves trying to come up with a model in which we can believe any races are benign and can't really have any consequences we haven't reasoned about.

Of course, unsafe Rust might truly benefit from nicer models anyway; they could hardly be worse than the existing C++11 model. But that's a different story.




>I'm definitely not an expert,

I think that's something we can both agree on.



