
From the looks of it, it seems Khronos's API may actually be significantly better/easier to use than DirectX?

I haven't heard of DX12 getting overhauled for efficient multi-threading or great multi-GPU support. DX12 probably brings many of the same improvements Mantle brought, but Vulkan seems to go quite a bit beyond that. Also, I assume DX12 will be stuck with some less-than-pleasant DX10/DX11 legacy code as well.




It sounds like Vulkan is not going to be easy to use. If anything, it is going to be harder to use.

For example, in OpenGL you can upload a texture with glTexImage2D(), draw with glDrawElements(), then delete it with glDeleteTextures(). The draw command won't have completed yet, but the driver will free the memory once the texture is no longer being used.
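Roughly, the whole lifecycle is just this (GL header/loader and index buffer setup assumed):

    // Upload, draw, delete -- the driver handles the lifetime hazard for us.
    void UploadDrawDelete(GLuint tex, const void* pixels, int w, int h,
                          int indexCount) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr);

        // Legal even though the draw may not have executed yet: the driver
        // defers the actual free until the GPU is done with the texture.
        glDeleteTextures(1, &tex);
    }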

It sounds like with Vulkan, you'll need to allocate GPU memory for your texture, map and fill that memory (converting your data into the right format yourself), submit your draw commands, and then WAIT until the draw commands complete before you can deallocate the texture. At every step you're doing things that used to be automatic. So it's harder to use, but you're dealing with more of the real complexity inherent in programming a GPU and less artificial complexity created by the API.
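A rough sketch of the explicit version, in Vulkan-style C (struct setup, command buffer recording, and error handling all omitted; the memory would come from an earlier vkAllocateMemory call):

    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <cstring>

    void DrawThenFree(VkDevice device, VkQueue queue, VkImage image,
                      VkDeviceMemory memory, VkFence fence,
                      const VkSubmitInfo* submit,
                      const void* pixels, VkDeviceSize size) {
        // Fill the memory ourselves; format conversion is also our job.
        void* mapped = nullptr;
        vkMapMemory(device, memory, 0, size, 0, &mapped);
        memcpy(mapped, pixels, (size_t)size);
        vkUnmapMemory(device, memory);

        // Submit the recorded draw, with a fence to signal completion.
        vkQueueSubmit(queue, 1, submit, fence);

        // WAIT: freeing while the GPU may still read the image is our bug now.
        vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
        vkDestroyImage(device, image, nullptr);
        vkFreeMemory(device, memory, nullptr);
    }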


This is already how we do it on most consoles. The CPU has to sync with the GPU to know when the resources associated with draw commands are safe to release. Having something like glTexImage2D is way too high level for these graphics APIs and would be a luxury. Instead we get a plain memory buffer and convert manually to the internal pixel format.
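The conversion step might look like this; the real console layouts are proprietary, so Morton (Z-order) tiling stands in here as a representative example (assumes a square, power-of-two image):

    #include <cstdint>

    // Spread the low 16 bits of n so a zero bit sits between each pair.
    static uint32_t Part1By1(uint32_t n) {
        n &= 0x0000ffff;
        n = (n | (n << 8)) & 0x00ff00ff;
        n = (n | (n << 4)) & 0x0f0f0f0f;
        n = (n | (n << 2)) & 0x33333333;
        n = (n | (n << 1)) & 0x55555555;
        return n;
    }

    // Copy a linear RGBA8 image into Morton (Z-order) tiled layout.
    void LinearToTiled(const uint32_t* linear, uint32_t* tiled,
                       uint32_t width, uint32_t height) {
        for (uint32_t y = 0; y < height; ++y)
            for (uint32_t x = 0; x < width; ++x)
                tiled[Part1By1(x) | (Part1By1(y) << 1)] = linear[y * width + x];
    }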

There is no waiting to free resources, however, unless either the CPU or the GPU is starving for work. We have a triple-buffering setup, and on consoles you also get to create your own front/back/middle surfaces as well as implement your own buffer-swap routine. This provides a sync point where we can mark resources as safe to release.
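A common shape for that sync point is a per-frame garbage list: deletes are queued during the frame and only executed once the swap guarantees the GPU has retired the frame that last used the slot. A sketch (the Free callback is hypothetical):

    #include <cstdint>
    #include <vector>

    constexpr int kFramesInFlight = 3;   // triple buffering

    struct PendingFree { void (*Free)(void*); void* resource; };
    static std::vector<PendingFree> g_garbage[kFramesInFlight];
    static uint32_t g_frame = 0;

    // Called instead of freeing immediately; the GPU may still be reading.
    void DeferFree(void (*Free)(void*), void* resource) {
        g_garbage[g_frame % kFramesInFlight].push_back({Free, resource});
    }

    // Called after the buffer swap, once the GPU is known to have finished
    // the frame that used this slot kFramesInFlight submissions ago.
    void BeginFrame() {
        auto& list = g_garbage[g_frame % kFramesInFlight];
        for (auto& p : list) p.Free(p.resource);
        list.clear();
    }

    void EndFrame() { ++g_frame; }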

It's definitely more complexity on the engine side, but as mentioned in the forum post, it makes everything much, much easier when you get to debugging and tuning. Also, having to implement (or maintain) all of that engine infrastructure gives us a better perspective on how the hardware works and how to optimize for it.

However, even with Vulkan or DirectX 12, I doubt NVidia or AMD will expose their hardware internals publicly, which is critical for optimizing shader code. On consoles we get profilers that show metrics from every stage of the GPU's internal pipeline. That makes it easy to spot why a shader is running slow without having to send your source code to the driver vendor.


Do you think -- since the Xbox One and PS4 both use an AMD GPU -- that developer tools will improve on desktop PCs?


Hard to say; they are very different tools aimed at different audiences. Console SDKs sit behind huge paywalls, and all their tools and documentation are confidential. The developer networks even go as far as checking whether the requesting IP address is whitelisted.

I haven't had to profile an AAA title on the desktop so far, so I don't know much about the state of the tools there. However, I've heard only good things about Intel's GPA.


I wonder if we'll start to see lightweight Vulkan (or DX12) wrapper libraries that attempt to restore some of the convenience of old-fashioned OpenGL, sans legacy baggage, without the complexity and opinionatedness of full-fledged game engines.
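The surface of such a wrapper might look almost immediate-mode, with staging uploads and deferred deletes hidden inside; this interface is purely hypothetical:

    // Hypothetical convenience layer over Vulkan/DX12: uploads go through a
    // staging ring, deletes are deferred until the GPU is done, but there is
    // no scene graph or material system -- just resource lifetime plumbing.
    struct Texture;
    Texture* CreateTexture2D(const void* pixels, int width, int height);
    void     Draw(Texture* tex, const float* vertices, int vertexCount);
    void     DestroyTexture(Texture* tex);  // safe even if still in flight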


There is some interest from LunarG in eventually reimplementing OpenGL in Mesa on top of Vulkan. So yes. It's the same way Gallium3D can implement Direct3D and OpenGL against one intermediate representation (TGSI) and backend hardware interface (winsys).


Coming from a very CUDA-heavy background, having much more low-level access to the GPU and being able to (having to) manage memory manually are very familiar/welcome features. :)
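It's the same discipline the CUDA runtime already imposes: allocate explicitly, and synchronize before freeing anything the device may still touch. For example:

    #include <cuda_runtime.h>

    void UploadAndRun(const void* src, size_t size, cudaStream_t stream) {
        void* devBuf = nullptr;
        cudaMalloc(&devBuf, size);                  // explicit GPU allocation
        cudaMemcpyAsync(devBuf, src, size, cudaMemcpyHostToDevice, stream);
        // ... enqueue kernels that read devBuf on `stream` ...
        cudaStreamSynchronize(stream);              // wait, as with a Vulkan fence
        cudaFree(devBuf);                           // only now is this safe
    }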


Honestly, for well-designed, modern-ish renderers this sort of thing tends not to be as big a deal as it might sound.

In general, most of the additional steps are things people already do. For example, you shouldn't be deleting textures that are in use anyway, because not all drivers have handled that well; and managing the equivalent of a command buffer is common even when the actual command buffer isn't something you have access to (see the sketch below)...
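For instance, a GL engine's own recorded command list often looks something like this (GL loader include assumed; the DrawCmd struct is illustrative):

    #include <vector>

    // App-side "command buffer": many engines record one of these even
    // though GL never exposes the driver's real command buffer.
    struct DrawCmd { unsigned texture; unsigned vao; int indexCount; };
    static std::vector<DrawCmd> g_commands;

    void RecordDraw(unsigned texture, unsigned vao, int indexCount) {
        g_commands.push_back({texture, vao, indexCount});
    }

    void Submit() {
        for (const DrawCmd& c : g_commands) {
            glBindTexture(GL_TEXTURE_2D, c.texture);
            glBindVertexArray(c.vao);
            glDrawElements(GL_TRIANGLES, c.indexCount,
                           GL_UNSIGNED_SHORT, nullptr);
        }
        g_commands.clear();
    }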


DX12 is actually a complete zero-legacy overhaul, just like Vulkan. It's designed with the same philosophy of tight, explicit control over how the GPU spends its time, and thin drivers; from the information we have so far, it seems to have quite a similar design.


DX11 already had a way better multithreading story than current-gen OpenGL (at present you can't even reliably do your swap-buffers call on another thread; I was told this by a driver developer from one of the vendors about a year ago).
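Concretely, D3D11 lets worker threads record command lists on deferred contexts that the render thread later executes on the immediate context; a sketch with COM error handling omitted:

    #include <d3d11.h>

    // Worker thread: record state changes and draws on a deferred context.
    void RecordOnWorker(ID3D11Device* device, ID3D11CommandList** outList) {
        ID3D11DeviceContext* deferred = nullptr;
        device->CreateDeferredContext(0, &deferred);
        deferred->Draw(36, 0);                     // ...plus state setup, etc.
        deferred->FinishCommandList(FALSE, outList);
        deferred->Release();
    }

    // Render thread: play the recorded list back on the immediate context.
    void SubmitOnRenderThread(ID3D11DeviceContext* immediate,
                              ID3D11CommandList* list) {
        immediate->ExecuteCommandList(list, TRUE);
        list->Release();
    }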

It's possible Khronos will leapfrog DX12 with Vulkan with regard to threading, but I find it highly unlikely. We'll know when one of the two actually publishes documentation (likely not soon, judging by how long it took for Mantle to become available to the public).



