
Duplicated memory and the general overhead of running separate processes (Chrome), as opposed to shared memory and lower overhead in a single process (IE? Old FF).

As for security: no, unless there's some unknown vulnerability now (or one they introduce later) that gets ported over and is somehow more effective across (potential) process boundaries. So, I doubt it.




Chrome uses separate processes for sandboxing purposes, not just for the fun of it. So yes, protecting against "some unknown vulnerability" is the entire point behind it.


As a fun fact, separate virtual machines running under a VMware ESXi hypervisor (which is about as "sandboxed" as you can get) still share memory: the hypervisor hashes the 'cold' pages of its VMs and, when it finds duplicate page content within or between VMs, merges the duplicates into a single copy-on-write page. ESXi has never had a security vulnerability due to this optimization, AFAIK.

Ref: https://labs.vmware.com/vmtj/memory-overcommitment-in-the-es... (search "page sharing")
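
To make the idea concrete, here's a toy sketch in C of content-based page sharing (my own illustration, not ESXi's actual code): hash every page, and when two pages hash and compare equal, rewire them to a single canonical copy. A real hypervisor would additionally map the shared page read-only and break the sharing with a private copy on the first write fault.

    /* Toy sketch of content-based page sharing -- my own
     * illustration, NOT ESXi's actual code. Identical pages are
     * merged into one canonical copy; a real hypervisor would also
     * map the shared page read-only and copy it on the first write
     * fault (copy-on-write). */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE 4096
    #define NPAGES    8

    /* 64-bit FNV-1a hash of a page's contents. */
    static uint64_t hash_page(const uint8_t *p) {
        uint64_t h = 0xcbf29ce484222325ULL;
        for (size_t i = 0; i < PAGE_SIZE; i++) {
            h ^= p[i];
            h *= 0x100000001b3ULL;
        }
        return h;
    }

    int main(void) {
        uint8_t *pages[NPAGES];
        uint64_t hashes[NPAGES];

        /* Simulate guest pages; every other page has identical
         * content, like the zero pages common across real VMs. */
        for (int i = 0; i < NPAGES; i++) {
            pages[i] = malloc(PAGE_SIZE);
            memset(pages[i], (i % 2) ? 0x00 : 0xAB, PAGE_SIZE);
            hashes[i] = hash_page(pages[i]);
        }

        /* Dedup scan: the first page seen with given content becomes
         * canonical; on a hash match, memcmp guards against hash
         * collisions before the pages are actually merged. */
        for (int i = 0; i < NPAGES; i++) {
            for (int j = 0; j < i; j++) {
                if (pages[j] != pages[i] && hashes[j] == hashes[i] &&
                    memcmp(pages[j], pages[i], PAGE_SIZE) == 0) {
                    free(pages[i]);
                    pages[i] = pages[j];   /* share one copy */
                    break;
                }
            }
        }

        for (int i = 0; i < NPAGES; i++)
            printf("page %d -> %p\n", i, (void *)pages[i]);
        return 0;
    }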


> ESXi has never had a security vulnerability due to this optimization, AFAIK.

Should be vulnerable to Flip Feng Shui (FFS), if I'm not mistaken: http://arstechnica.com/security/2016/08/new-attack-steals-pr...


True! Rowhammer-based attacks are kind of unique, though; I expect they'll be treated as a hardware bug and solved by releasing better hardware, rather than by inventing even more layers of securinoia to keep in mind from now on.

It's sort of like when WebGL was first getting going, and the GPUs of the time didn't expect to be fed shaders directly from potentially-malicious web sources. Rather than severely restricting the WebGL API, we got a new generation of GPUs that fail safe.


> I expect they'll be treated as a hardware bug and solved by releasing better hardware

When Rowhammer was first announced, people said "it's okay, we have ECC". Then ECC was shown to be vulnerable. "It's okay, vendors have promised to fix it in DDR4", they said. Now DDR4 is out, vendors have not deployed the fixes systematically[1], and you have to test every single RAM stick to make sure you're not vulnerable.

I'd really appreciate software workarounds as long as hardware vendors keep fucking up.

[1]: http://www.passmark.com/forum/memtest86/5395-rowhammer-mitig...


> Duplicated memory and just overhead from running separate processes

fork(2) does COW. With careful design†, forking a new process leaves everything shared except new and modified pages.

† Hard! But single-process threads + security is hard too.

http://unix.stackexchange.com/questions/58145/how-does-copy-...
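
For the curious, here's a minimal C sketch of that on Linux (my own example, not from the linked thread): after fork(), almost all of a large buffer stays physically shared, and a write from the child copies only the touched page.

    /* Minimal sketch of fork(2)'s copy-on-write behavior on Linux --
     * my own example. After fork(), the child shares the parent's
     * physical pages; a write from either side copies only the
     * touched page, so a carefully designed multiprocess program
     * need not duplicate its whole heap. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define BUF_SIZE (64 * 1024 * 1024)   /* 64 MiB */

    int main(void) {
        /* Touch a large buffer so its pages physically exist. */
        char *buf = malloc(BUF_SIZE);
        memset(buf, 'x', BUF_SIZE);

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: buf is still physically shared with the parent.
             * This write copies a single page (typically 4 KiB); the
             * other ~64 MiB remain shared. Compare Private_Dirty vs.
             * Shared_* in /proc/self/smaps to verify. */
            buf[0] = 'y';
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        free(buf);
        return 0;
    }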


Only on Linux: POSIX does not mandate CoW, and Windows has no comparable API.


While not mandatory, I always sort of assumed BSD and Darwin do the same...

Just wanted to point out that the memory overhead isn't necessarily inherent to a multiprocess design, but with a sizable portion of users on Windows the point is practically moot anyway.



