Calm and civil discourse about update 31-07


Oh Great!

We have another user who thinks technicians & technology should be viewed as Magicians & magic. :scream::joy::beers::sunglasses::+1::sparkles:


With Carmack being the Grand Wizard of the Blue Mountain who sees what no man can see


What do you think of the old adage “You do not really understand something unless you can explain it to your grandmother.”?


Oof, well good luck. Mine has severe dementia. She doesn’t even remember what a phone does.
I’d be very impressed indeed if you could teach her something :stuck_out_tongue:


Imagine you have two cars, one with more HP than the other, but the more powerful one turns out to be slower on the track. Now you ask that car’s team why they were slower, and they say it was because more HP needs more time to be fully utilized.

This is not an explanation, because it does not explain why it is so. It is at best a commentary, or just an external observation, when common knowledge says that cars of the same category should rank according to their HP. Sure, the devil is in the details, which are not spelled out, either in my example or in this Pimax case. That, however, should not be an excuse for Pimax not to explain it.

I backed the 5K, so I am not actually affected by this “explanation”, but if I were an 8K backer, I would hardly be able to decide, based on the info Pimax has provided so far, whether I really want to change my 8K pledge to a 5K or not.


Just because that statement is true doesn’t mean everyone needs grandmother-level explanations.


Pimel, you have an incorrect understanding of what the expression “layman’s terms” means. I could explain it to you… but the irony :smiley:

Cheers, it was quite educational; the YouTuber did a great job.


The explanation was that the 8K needs a better input signal (like a supersampled 6K or 8K game render) to really show its potential, because a 5K-rendered signal won’t do the trick. The 5K would be fine with a 5K-rendered input signal.

Anything beyond that (like the combined reasons behind that reason) is just not for the average Joe. That’s all I’m saying.

I simply don’t feel like I’m being fooled; it’s absolutely technically reasonable.

Actually I backed the 5K too, because I was skeptical about the scaling, based on bad experiences in many ways, including writing bad scaling algorithms in a project of my own a decade or so ago… :stuck_out_tongue_winking_eye:
But as a long-term strategy it is technically the right path; they just need to get it working in an acceptable way (i.e. use supersampled content) and improve from there…


You are right that in the flat-panel business supersampling is a way to achieve anti-aliasing, i.e. rendering at a higher resolution than the panel and then downsampling makes the picture “smoother”. I probably would not call it “sharpness”, but I guess that is just a different wording.

In an HMD, supersampling has basically the same role, except for one important detail: the downsampling is no longer uniform, because it also includes the pre-lens warp transformation. This transformation is non-linear and causes some picture regions to be downsampled more than others. The actual distribution of the downsampling “intensity” depends on the optical properties of the lenses and the FOV. Suffice it to say that Oculus decided to use an SS factor of 1.7 to achieve, at worst, 1:1 pixel mapping over the 90° FOV. This means there are parts of the picture which do not get any anti-aliasing treatment (around the optical center), as they are basically mapped one to one, while the other regions (towards the rim of the FOV) get more downsampling/anti-aliasing than the nominal 1.7 SS would suggest.

This is also the reason why the tools offer the user an option to increase the SS factor, to apply or improve anti-aliasing in the regions which are mapped close to 1:1, or to decrease it at the expense of introducing aliasing artifacts in those regions.
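To make the non-uniform downsampling concrete, here is a toy 1-D sketch; the cubic warp and the coefficient k are invented for illustration and are not Oculus’ or Pimax’s actual distortion profiles:

```python
def texels_per_pixel(r, ss, k=0.7):
    """Render-target texels consumed per panel pixel at normalized radius r,
    for a toy pincushion sampling map f(r) = (r + k*r**3) / (1 + k).
    The local consumption is ss * f'(r); k is purely illustrative."""
    return ss * (1 + 3 * k * r * r) / (1 + k)

# Choosing ss = 1 + k (1.7 with this toy k) gives exactly 1:1 mapping at the
# optical center, i.e. no anti-aliasing headroom there...
assert texels_per_pixel(0.0, 1.7) == 1.0
# ...while the rim consumes ~3.1 texels per pixel, i.e. heavy anti-aliasing:
assert abs(texels_per_pixel(1.0, 1.7) - 3.1) < 1e-9
```

Raising ss above 1 + k is then what finally buys anti-aliasing at the center, which is the knob the SS slider exposes.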

As the 8K (and 5K) use a larger FOV than the Rift or Vive, common sense would suggest that the SS factor should also be higher if one wants to keep 1:1 pixel mapping in the center. On the other hand, the display resolution (and subpixel arrangement) may help mitigate that a bit, and it is not clear how the advantage and the disadvantage add up in the final product.

Pimax never acknowledged which SS factor they are using for the pre-lens warp transformation (even though they were asked about it many times during the KS and after). Which is unfortunate, because this is clearly one of the most important factors that define whether your GPU can actually handle it or not.

I guess we can only wait and see on this point.

This has already been addressed by others, so I will just say that the HW scaler works on serial data as they are received. It can wait for one or two scanlines to be ready before starting its work, to minimize the latency. So technically I was not completely correct, as there are algorithms which work just fine in this setup, i.e. without the scaler ever having the complete picture. I was just trying to point out that the algorithms will probably be rather simple and will only work locally, never globally on the whole image.
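For illustration, here is a generic two-scanline-buffer linear interpolator of the kind that can run in such a streaming setup; this is a sketch of the technique, not Pimax’s actual scaler:

```python
def stream_upscale_vertical(rows, in_h, out_h):
    """Yield out_h scanlines linearly interpolated from in_h input scanlines,
    holding only two input lines in the buffer at any time -- a purely local
    algorithm that never sees the whole image."""
    it = iter(rows)
    lower = next(it)           # line buffer, slot 0
    upper = next(it, lower)    # line buffer, slot 1
    idx = 0                    # input index of `lower`
    for y_out in range(out_h):
        # position of this output line in input-line coordinates
        pos = y_out * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        # slide the two-line window forward until it brackets `pos`
        while pos > idx + 1 and idx < in_h - 2:
            lower, upper = upper, next(it, upper)
            idx += 1
        t = pos - idx
        yield [a + t * (b - a) for a, b in zip(lower, upper)]

# Upscaling two lines to three inserts the average in the middle:
out = list(stream_upscale_vertical([[0, 0], [10, 20]], in_h=2, out_h=3))
assert out == [[0, 0], [5.0, 10.0], [10.0, 20.0]]
```

The horizontal direction is even cheaper (it only needs a few neighbouring pixels from the current line), which is why the latency can stay at a scanline or two.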

I must admit I do not expect Brainwarp to help or to add anything to the (figurative) picture. From the original (even though a bit vague) explanation given by Pimax, I concluded that it would bring as many problems as solutions and was not worth it. Unfortunately, Pimax never elaborated on the technology further.

Edit: Fixed the wrong explanations about the difference in downsampling in pre-lens warp transformation. Thanks to @jojon.


Thanks for that in-depth reply. There is a lot more going on than initially meets the eye (boom boom). I did not know SS was applied depending on the warp transform; I thought it was just applied uniformly to the entire image. Now I understand why there is a different base value for each HMD, and it almost sounds like a fixed, pre-determined foveated supersampling effect.

Is it also true that SS is only partially applied to the second image? I am sure I read that it uses some kind of lookup from the completed left-eye image to reduce the amount of work on the right-eye image.


me too :wink:


Yes, it does. Thank you.


I’d be curious to see how the 8K looks with a 1080 Ti in the 170-degree FOV mode.


So has Pimax already shipped M2 to all 10 testers @xunshu…? Or is M2 still in development?


I still do not understand why, with the same input resolution, same FOV, and same lenses, the 8K should not be able to produce the same image as the 5K.
I can understand that it needs more SS to show a difference over the 5K, but why would the onboard scaler that maps the input image to the higher screen resolution introduce artifacts if the SS is not at the right level?
And why couldn’t some basic onboard scaling mode be set up to handle an 8K input image computed with the 5K’s SS ratio? That would be a better way to let 8K users limit the headset to 5K-like behaviour while waiting for a more powerful rig, instead of replacing it with the cheaper, shorter-term 5K.


All they said was that they were going to give people the option, and options are good.

If you have a 1070 or lesser GPU but plan on upgrading, then keep the 8K pledge. If you want to save some money and maybe get some extra accessories, ask them to switch it down to a 5K. It’s an option; they aren’t telling you that if you have a 1070 you are restricted to a 5K.


Expect the update on Friday; it’s chopstick new year next week.


And the statement isn’t true. :slight_smile:


We-eell… There is, as far as my understanding goes, an amount of determined-by-lens-properties “base” oversampling, needed in order to fill every available physical pixel in the centre of the image with detail, due to how much that region is blown up by the software barrel-type pre-distortion that counters the pincushion distortion of the lenses; i.e. it is done in order not to undersample the important bit right ahead of you.

…and since we cannot normally render different parts of the view plane at different resolutions, this means supersampling the entire image, and not just the bits that need it, which in turn means we waste a lot of rendering effort on the periphery, where we end up greatly oversampling the pixels, way past the point of diminishing returns. (Some elements stretching out while others compress just about cancels out, but there are more of them in the one direction.)
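To put a rough number on that peripheral waste, here is a toy 1-D estimate; the cubic warp and the coefficient k are invented for illustration, not taken from any real lens profile:

```python
def uniform_ss_cost_1d(k=0.7, n=100_000):
    """Average texels rendered per panel pixel along a 1-D slice, when a
    uniform SS of 1+k gives exactly 1:1 sampling at the centre of a toy
    pincushion sampling map f(r) = (r + k*r**3) / (1 + k)."""
    # texels per pixel at radius r is 1 + 3*k*r**2; average it over the slice
    return sum(1 + 3 * k * (i / n) ** 2 for i in range(n)) / n

# An ideal variable-rate renderer matching 1 texel per panel pixel everywhere
# would cost 1.0; uniform supersampling costs 1 + k per pixel, so with this
# toy k roughly 40% of the work in the slice oversamples the periphery.
assert abs(uniform_ss_cost_1d() - 1.7) < 0.01
```

(In 2-D the waste compounds, since rendering cost scales with the square of the linear supersampling factor.)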

So kind of the opposite of some stuff I see above.

I also cannot help but notice one of the worst kinds of suppression tactics rearing its ugly head around here, where you apparently don’t get to open your mouth unless you are a certified expert in a relevant field. I hope we can raise ourselves above appeal-to-authority argumentation and such.


Chinese new year was in February, or am I lost here? :grin: