About 60-70% performance improved on RTX2080Ti


Sweviver did an entire video of performance with his GTX 1070 laptop and it worked fine.


Ok, I didn’t know that. The videos from him that I saw were shot on an overclocked 1080 Ti, and even that card struggled.


Performance is improved, but by the card, not by the software, and not to a great extent.
SweViver’s 1080 Ti at 2 GHz should have about 30% more performance, better than a 2080 Ti without an overclock.


Yeah, I was very surprised myself. I was glad to see it, since I have a 1070 laptop. I figure it will buy me time until 2080 Ti prices come down to a reasonable level. I like the fact that the headset will only get better.


I’m wondering if this is totally different or not. I would say the possibility to access a depth buffer (as opposed to video interpolation) is a bonus to avoid interpolation artifacts.

What makes me doubt that SVP-like interpolation is mostly what ASW is doing is latency. Latency is critical in VR, and frame interpolation as done by SVP means you need to delay the display of the stream (to be able to create the interpolated frame between the last rendered frame and the previous one), so the resulting latency may be incompatible with VR.

So I’m wondering if ASW involves motion prediction to reduce the latency induced by frame interpolation. The problem would then be similar to the motion prediction required for networked gaming. If the game (GPU) were asked to render not the current state but the t+1 state (a prediction of what the 3D image will be one frame ahead), then the interpolated frame could be added without adding latency.

What cannot be interpolated, however, is user input. That means a frame interpolation method based on motion prediction as described above would still suffer from “some kind” of added input latency. However, at 90 fps, displaying one frame late would mean 11 ms added to the input-to-output latency (user input vs. displayed image). If the initial display latency (without frame interpolation) is low enough, maybe the input-to-output latency is still acceptable with 11 ms more, especially since the latency between user input and the game engine (physics etc.) would remain lower.
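The frame-time arithmetic above can be sketched quickly; the base latency figure here is a made-up number just for illustration, not a measured value:

```python
# Latency sketch: at 90 Hz, displaying one frame late adds one frame time
# to the input-to-output (motion-to-photon) latency.

refresh_hz = 90
frame_time_ms = 1000 / refresh_hz  # ~11.1 ms per frame at 90 Hz

# Hypothetical base display latency without interpolation (assumption).
base_latency_ms = 20.0

delayed_latency_ms = base_latency_ms + frame_time_ms
print(f"one frame at {refresh_hz} Hz = {frame_time_ms:.1f} ms")
print(f"input-to-output latency with a 1-frame delay: {delayed_latency_ms:.1f} ms")
```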

I don’t know if what I describe here makes sense or not. I’m not sure what the result would be in terms of perceived latency when all of the above are mixed together. The question is: would the resulting perceived latency be similar to a game played on a TV with high latency?

I tend to answer no. I expect the result would only be very brief overshoots (or delays) due to unpredictable user inputs. I expect this latency would be most visible for digital inputs; when the inputs are analog, the overshoots/delays due to unpredictable user input may remain invisible to the user, as they should be smoothed out by the analog nature of those inputs (not on/off but progressively increased/decreased).

Well, I may be too tired while writing this, not sure my neurons are still connecting well :sleeping:
Feel free to react on what I’m describing here :innocent:


Nvidia on Turing multi view rendering:

MVR now supports rendering up to 4 views in a single pass
MVR supports different XYZW components of the vertex positions of each view
MVR supports the ability to set other generic attributes, apart from vertex position, to be view-dependent

That is in contrast to X only on Pascal. VK_NVX_multiview_per_view_attributes is a Vulkan extension that exists specifically to identify Nvidia drivers that allow only the X coordinate to vary between views. NV_stereo_view_rendering is an OpenGL extension that implements the same thing:

If a result variable binding matches “result.secondaryposition”, updates
to the “x” component of the result variable provide the “x” coordinate for
the position from the secondary view. The y, z, and w coordinates for the
secondary view position are expected to be the same as the primary
position and are taken from the “result.position”. Updates to y, z, and w
components of “result.secondaryposition” are ignored.

As for cards without these extensions, they’d simply render both views independently, which means running the geometry shaders twice. This isn’t necessarily a huge difference; shadow buffer rendering is independent either way, and fragment shaders were duplicated either way (but might not need to be, something to consider in the vein of multi-resolution shading).
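As a toy model of the Pascal-era restriction quoted from the NV_stereo_view_rendering spec above (only the x coordinate of the secondary view position may differ; y, z, and w are taken from the primary position), here's a small sketch. This is plain Python illustrating the data flow, not an actual shader API:

```python
# Toy model of single-pass stereo on Pascal: the secondary view's position
# reuses y, z, w from the primary position, and only x may vary per view.

def secondary_position(primary, secondary_x):
    """primary is a clip-space (x, y, z, w) tuple; writes to the secondary
    position's y/z/w would be ignored, so only x is taken from the second view."""
    x, y, z, w = primary
    return (secondary_x, y, z, w)

# Example: primary position (0.5, 0.2, 1.0, 1.0), secondary x offset by -0.05.
print(secondary_position((0.5, 0.2, 1.0, 1.0), 0.45))  # (0.45, 0.2, 1.0, 1.0)
```

Turing's MVR lifts exactly this restriction: all four components, and even other attributes, can differ per view.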


many games didn’t work at all.


@PimaxVR I didn’t get details on this 60 and 72 Hz mode.
Is this 60/72 Hz mode just a different refresh rate at the old 1440p resolution, or will it work at native 4K at 60/72 Hz?


I guess the main difference from Smooth Video Project (SVP) is that SVP interpolates between two known images, while re-projection in VR is effectively extrapolation of what the next image might be, taking into account:
a) previous images
b) user input (which also has to be extrapolated)
Then, applying some crazy transformation to the last rendered image, we get the extrapolated one.
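The interpolation-vs-extrapolation distinction can be shown with a 1-D toy; the sample values are made up and this is obviously not how ASW actually works, just the difference in where the synthesized frame lands:

```python
# Interpolation (SVP-style) vs extrapolation (reprojection-style), 1-D toy.
# x_prev, x_last: some tracked value (say, head yaw) at the last two frames.

def interpolate(x_prev, x_last, t=0.5):
    """SVP-style: synthesize a frame *between* two known frames (requires delay)."""
    return x_prev + t * (x_last - x_prev)

def extrapolate(x_prev, x_last):
    """Reprojection-style: predict the *next* frame from the motion so far."""
    return x_last + (x_last - x_prev)

print(interpolate(10.0, 12.0))  # 11.0 -> lands between the known frames
print(extrapolate(10.0, 12.0))  # 14.0 -> guesses one frame ahead, no delay
```

Interpolation is more accurate but must wait for the later frame; extrapolation can be shown immediately but overshoots when the motion changes unpredictably, which matches the "brief overshoots on unpredictable input" point above.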


That’s not possible; any 2080 Ti would beat any 1080 Ti, OC or not.


“About 60-70% performance improved on RTX2080Ti” - Literally BS title.


It should, but it doesn’t, as long as the average improvement is only 15%.

That’s why everyone is complaining about underperformance.
You have to apply almost the same overclock to get the same performance, and it also appears that overclocking headroom has been limited to a point by Nvidia.

Nvidia wants to sell all the 1080 Ti cards it has in stock and cares little about Pimax’s plans. They want you to buy a 1080 Ti now and, in six months or a year, a 2080 Ti.

Seriously, a 200% cost for a 15% gain?

That’s not to say that driver improvements and new RTX technologies related to variable rendering detail won’t add a few more FPS, even more so if Pimax uses their eye-tracking module, but right now it’s only an average 15% improvement.


Yes, sure, we will do more tests on AMD GPUs and will share the results.


Thank you for your answers. Sometimes you are the only one who answers questions, and we still have a lot of them.
I wanted to thank you for your time and your effort. It’s really appreciated.


Which AMD GPUs do you have available for testing?


Thank you for the response but I will not be holding my breath waiting for the results…

I’ve only been asking for this information for MONTHS…


Concern list

  • sharpness (may be better, but wait to see the actual image).
  • wobble when turning the head (unknown; waiting for Sebastian to check with a 2080 Ti if he can).
  • black level and contrast (some promise, but progress still unknown).

If you can clarify these concerns, it may be easier to make a decision.


I am not saying that it is cost-effective or even worth the money, but I just replaced a Titan X Pascal (basically a 1080 Ti with a bit more memory bandwidth) with a 2080 Ti. Both watercooled, both overclocked, and in direct comparison I am seeing massive gains with everything I throw at it. I do not know where you got the 15%, but I can tell you this: it is not true.

It really shines at high res, btw, which is nice considering the Pimax.

So far I have only seen one benchmark of a 2080 Ti with a Pimax, and that was, I believe, some flight sim that is single-threaded and CPU-bound. Of course no GPU in the world will ever perform if the game engine bottlenecks the whole pipeline through the CPU.


Absolutely correct, but also a real world scenario. If the game you care about is incapable of using the upgrade, the upgrade isn’t worth it to you.


Another advantage of water cooling is that it is way more forgiving in a high ambient temperature environment. I live in Australia, and during the summer months overclocking was painful until I started using custom water loops.