
Exactly. This is why I like Rich Hickey's Simple Made Easy [1] so much. His point is that easy (familiar) constructs make it harder to build simple systems, while simple constructs yield simpler systems even though they can be harder to learn.

[1]: http://www.infoq.com/presentations/Simple-Made-Easy




I love this talk, and Rich Hickey's talks in general, but I think this goes beyond that.

On one hand you want full control of the HW, much like you had with game consoles.

On the other you want security: the model must work in a sandboxed environment (OS, process, VM, threads, sharing, etc.) along with security checks. The oldest check I remember was making sure vertex index buffers handed to the driver/API don't reference invalid memory: something you'd verify through tests for a console game, but something the driver/OS must enforce and stop at runtime in the non-console world (PC/OSX/Linux/etc.).
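For concreteness, here's roughly what that check amounts to. This is a hypothetical, simplified sketch (made-up function name, not any real driver's code) of validating an untrusted index buffer on the host before submission:

    #include <stddef.h>
    #include <stdint.h>

    // Reject a draw call if any index points past the end of the bound
    // vertex buffer, so the GPU is never asked to read out of bounds.
    bool validate_index_buffer(const uint32_t *indices, size_t index_count,
                               size_t vertex_count) {
        for (size_t i = 0; i < index_count; i++) {
            if (indices[i] >= vertex_count)
                return false; // out-of-bounds reference: refuse to submit
        }
        return true; // safe to hand to the GPU
    }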

From the little I've read of this API, it seems like security is in the hands of the developer and there doesn't seem to be much OS protection, so most likely I'm missing something... but whatever protection is to be added definitely wouldn't have been needed in the console world.

Just a rant; I'm not a graphics programmer, so it's easy to rant on topics you've only scratched the surface of...

----

(Not sure why I can't reply to jeremiep below, but thanks for the insight.) I was only familiar with the one case I posted above, and that was back in 1999. If my memory serves me well, drawing primitives on Windows NT was slower than on 95 because NT had to check whether index buffers referenced out-of-bounds memory, while nothing like that existed on 98.


Security is actually much easier to implement on the GPU than on the CPU, for the simple reason that GPU code has to be pure in order to get this degree of parallelism. A shader is nothing more than a transform applied to inputs (attributes, uniforms and varyings) to produce outputs (colors, depth, stencil).

Invalid data would simply cause one GPU task to fail while the other tasks happily continue executing. Since they are pure and don't interact with one another, there is no need for process isolation or virtualization.

Basically, it's easy to sandbox a GPU when the only data it contains are values (no pointers) and pure functions (no shared memory). Even with this simplified model, the driver still has everything it needs to enforce security.
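To make the purity argument concrete, here's a minimal sketch of a shader-like transform, written as a CUDA kernel for familiarity (the comment is about shaders in general, so take this as an analogy): every thread reads its own input and writes only its own output slot.

    #include <cuda_runtime.h>

    // A shader-like pure transform: out[i] depends only on in[i] and a
    // uniform scale factor. No thread touches another thread's data, so
    // a failing element can't corrupt anything outside its own slot.
    __global__ void scale_colors(const float *in, float *out, float scale,
                                 int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i] * scale; // pure: same inputs, same output
    }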


You are describing a GPU from the 1990s. A modern GPU is essentially a general-purpose computer sitting on the PCIe bus, able to do anything the CPU can. It does not have to run pure functions (e.g. see how scatter writes are used for normal graphics tasks in [1]) and can write to any ___location in the memory it can see. Securing it is as easy/hard as securing a CPU: if you screw up and expose some memory to the GPU, it can be owned just like memory exposed to a CPU task [2].

1. https://software.intel.com/en-us/blogs/2013/07/18/order-inde...

2. http://beta.ivc.no/wiki/index.php/Xbox_360_King_Kong_Shader_...
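As a hedged illustration of that contrast (CUDA again, being the clearest mainstream case of general-purpose GPU code): nothing stops a modern GPU kernel from chasing pointers and writing wherever they point, within whatever memory has been mapped in for it.

    #include <cuda_runtime.h>

    // Nothing pure about this: each thread follows a pointer stored in
    // memory and writes through it. The GPU will happily store to any
    // address that is mapped into its address space.
    __global__ void scatter_write(int **slots, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            *slots[i] = i; // arbitrary pointer-chasing write
    }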


GPUs these days have MMUs, with address spaces allocated per context. It's implemented internally to the driver, though, so you don't see it. The mappings are normally different from the CPU's, but the point of AMD's HSA stuff is to make the CPU's and GPU's MMUs match up.
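CUDA's managed memory gives a concrete (vendor-specific) taste of what matched-up MMUs buy you: a single pointer that is valid on both the CPU and the GPU. A minimal sketch:

    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void increment(int *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] += 1;
    }

    int main() {
        const int n = 256;
        int *data = nullptr;
        // One allocation, one pointer, valid on both CPU and GPU: the
        // runtime/driver keeps the two MMUs' views consistent.
        cudaMallocManaged(&data, n * sizeof(int));
        for (int i = 0; i < n; i++) data[i] = i;

        increment<<<1, n>>>(data, n);
        cudaDeviceSynchronize(); // wait before the CPU touches it again

        printf("%d\n", data[0]); // prints 1
        cudaFree(data);
        return 0;
    }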


(To answer the lack of a reply button:)

This is just HN adding a delay before the reply link appears, based on how deeply nested the comment is: the deeper the comment, the longer the delay. It's a simple but effective way to prevent flame wars and the like.


yep, I love that talk and I find myself pointing people towards it all the time :)





