That drawing doesn’t account for lens types. You could do all kinds of warping in a lens to spend less resolution on the outer edges and more on the center focus
That’s because it’s about the GPU rendering. The horizontal line represents the rectilinear frame buffer before lens warping is applied, which is the main factor in how heavy rendering is. In that model, each view is one straight line, not representative of how it looks in the readout buffer from GPU to panel. It is at this level that tricks like radial density masking, occlusion meshes, and variable resolution shading need to operate. The problem here is that GPUs are optimized for a single perspective divide for all geometry processing - a linear algebra solution that assumes a rectilinear projection. The dream, on the other hand, would be to trace rays for each panel subpixel and skip the lens warping pass entirely.
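To make the "rectilinear buffer is the heavy part" point concrete: for a planar projection a ray at angle theta from the optical axis lands at screen coordinate tan(theta), so the buffer spends 1/cos²(theta) times more pixels per degree at the edges than at the center. A quick back-of-the-envelope sketch (the FOV and resolution numbers are just illustrative assumptions, not any specific headset):

```python
import math

def pixels_per_degree(theta_deg, total_fov_deg=110.0, width_px=2000):
    """Pixels spent per degree of view angle, at theta_deg off-axis,
    for a rectilinear buffer spanning +/- total_fov_deg/2."""
    half = math.radians(total_fov_deg / 2)
    # buffer pixels per unit of screen coordinate x = tan(theta)
    px_per_x = width_px / (2 * math.tan(half))
    theta = math.radians(theta_deg)
    # dx/dtheta = 1/cos^2(theta), converted from per-radian to per-degree
    return px_per_x * (1 / math.cos(theta) ** 2) * math.pi / 180

for angle in (0, 25, 50):
    print(f"{angle:2d} deg off-axis: {pixels_per_degree(angle):5.1f} px/deg")
```

At 50° off-axis the buffer burns about 2.4x the pixels per degree that it does at the center, which is exactly the waste that radial density masking and variable resolution shading try to claw back.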
Typical steps past this level include drawing each eye at a different angle, or splitting each eye into multiple slices, each time to reduce the angle between the render plane normal and the view direction. The engineer’s rule of thumb to apply is that sin x, tan x and x (in radians) are all about the same for sufficiently small x - which sadly translates to a low enough field of view.
Also worth mentioning is texture space shading. While still in the R&D stage, in addition to managing the shading rate via MIP selection, it also allows most of the shading done for the left eye to be reused for the right one. It gives great anti-aliasing, and you can distribute the shading work across 3 frames without users noticing.
While reflections and subsurface scattering can’t be done like this, and we have yet to see how well Nvidia’s acceleration handles the overhead, on paper at least texture space shading is probably the most exciting optimization coming for VR.
Referencing Doc-Ok was my thing. I guess I have to find another way to stand out now
You could quote Doc Ock.