Turing Shader improvements - VR


Texture Space Shading essentially allows Turing to reuse previously computed and shaded pixels on a frame-by-frame basis, allowing for both improved image quality and performance. These shading results are saved as texels in a texture space. If a given pixel was previously shaded at full resolution, saved as a texel, and is still visible in the next frame, there is no need to shade it again: its result is simply loaded from the texture space. This saves processing grunt because only new texels have to be shaded, and the GPU can look at adjacent texels and cut down on workload by extrapolating from already-saved results. This is of particular importance in VR, where two slightly differing images have to be computed for every frame. With TSS, there is no longer any need to shade two full scenes, one per eye: developers can process the full-resolution image for the left eye and apply TSS to the right eye, shading only the texels that can't be reused from the left eye's results (because they were culled from view or obscured).
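To make the reuse idea concrete, here is a minimal CPU-side sketch of the caching logic described above. This is purely illustrative: real TSS runs on the GPU through the driver/API, and the `shade` function, texel IDs, and eye-coverage ranges here are all made-up stand-ins.

```python
def shade(value):
    # Stand-in for an expensive per-texel shading computation (hypothetical).
    return value * 2

class TexelCache:
    """Texture-space cache: each texel is shaded once, then reused."""
    def __init__(self):
        self.cache = {}
        self.shaded_count = 0  # counts actual shading work performed

    def resolve(self, texel_id, value):
        if texel_id not in self.cache:        # new texel: must be shaded
            self.cache[texel_id] = shade(value)
            self.shaded_count += 1
        return self.cache[texel_id]           # otherwise reuse saved result

# The left eye shades its full set of texels; the right eye's view overlaps
# heavily, so only the texels not covered by the left eye get shaded anew.
cache = TexelCache()
left_eye  = {i: i for i in range(100)}        # texel_id -> input (made up)
right_eye = {i: i for i in range(5, 105)}     # mostly overlapping view

left_img  = {t: cache.resolve(t, v) for t, v in left_eye.items()}
right_img = {t: cache.resolve(t, v) for t, v in right_eye.items()}

print(cache.shaded_count)  # 105 shades instead of 200 for two full passes
```

The point of the sketch is just the bookkeeping: shading both eyes independently would cost 200 shade invocations, while the cache brings it down to 105 because the overlapping texels are computed once.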


Do we know if this needs to be specifically enabled on the engine side? Because I painfully remember all those improvements Pascal was meant to bring to VR, such as LMS et al., and in the end fuck all titles ended up using them.


I don’t get why they are not implementing it: use an engine like, say, Unreal or Unity and most of it is not too hard to get working… (I admit I lack in-depth knowledge, but I have tinkered with both.)


Problem is, many of those features were implemented in separate branches off the main branch, and in some cases those branches haven’t been kept up to date with the main branch.

This causes massive headaches for devs, who in turn stick with the main branch.