
OpenCL and OpenGL are basically already scripting languages that you happen to type into a C compiler. The CUDA advantage was actually having meaningful types and compilation errors, without the intense boilerplate of Vulkan. But this is 100% a Python-for-CUDA-C replacement on the GPU, for people who prefer a slightly different bracketing syntax.
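
For context, here's what "a scripting language you type into a C compiler" looks like in practice: the OpenCL kernel is just a C-ish source string handed to the driver's compiler at runtime, so errors in it only surface when the program is built. A minimal sketch using pyopencl (not mentioned above, purely illustrative; the kernel and names are made up):

    # Illustrative only: shows the "source string compiled at runtime" workflow.
    import numpy as np
    import pyopencl as cl

    src = """
    __kernel void scale(__global float *x, const float a) {
        int i = get_global_id(0);
        x[i] = a * x[i];
    }
    """

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    prg = cl.Program(ctx, src).build()   # any error in src only shows up here, at runtime

    x = np.arange(16, dtype=np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=x)
    prg.scale(queue, x.shape, None, buf, np.float32(2.0))
    cl.enqueue_copy(queue, x, buf)       # x is now scaled in place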



> But this is 100% a Python-for-CUDA-C replacement on the GPU

Ish. It's a Python maths library made by Nvidia: an eDSL and a collection of curated libraries. It's not significantly different from stuff like NumPy, Triton, etc., apart from being made by Nvidia and bundled with their tools.
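
For a sense of what "eDSL" means here, Triton (named above) already works this way: the kernel body is Python that gets JIT-compiled for the GPU. A minimal sketch (kernel name and block size are made up for illustration):

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def scale_kernel(x_ptr, a, n, BLOCK: tl.constexpr):
        # Each program instance handles one BLOCK-sized chunk of x.
        pid = tl.program_id(axis=0)
        offs = pid * BLOCK + tl.arange(0, BLOCK)
        mask = offs < n
        x = tl.load(x_ptr + offs, mask=mask)
        tl.store(x_ptr + offs, a * x, mask=mask)

    x = torch.arange(16, dtype=torch.float32, device="cuda")
    grid = (triton.cdiv(x.numel(), 128),)
    scale_kernel[grid](x, 2.0, x.numel(), BLOCK=128)   # scales x in place on the GPU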


I’m mainly interested in the performance implications. The less shit between me and the hardware, the better the performance should be, at least in theory. In a world where these companies want to build nuclear power plants just to power NVIDIA GPU data centers, I feel like we should be optimizing the code wherever possible.





