Calm and civil discourse about update 31-07


Pimax have chopstick fights next week


I think this is what we need to understand:

There are advanced scalers, as found in monitors & TVs. The scalers used with mobile displays, in this case, are basic in comparison, & thus a lower resolution needs more supersampling to look good at the native panel resolution.

Think of routers: you can buy a basic, say, AC1900 router; it will perform decently, but not as well as a router featuring a dual-core processor.


I don’t get it. But this sounds like a racist comment… :angry:


I don’t know that I would speak of advanced scalers in monitors and TVs; On the contrary, they are rather infamous, and leave some people going out of their way to keep running in native resolution, including tolerating black borders.

Anyway; As far as I recall, any previous attempt at pointing out issues with the quality of realtime scaling in devices has consistently been met with voices trying to diminish the point, up to, in stark contrast with today, saying it’s not going to have any impact whatsoever – the words that were needed back in those days, to keep the stormcrows from descending.

From a message consistency standpoint, it certainly leaves the old claim, that the 8k would be pretty much indistinguishable from the 8kX, dangling by a very thin thread.

I am sure that what we have will be perfectly “good enough” for what it is, but would have preferred straightforward communication throughout. :7


If by “some stuff you see above” you meant my explanation to @D3Pixel , you are right as I was mixing up the pre-lens warp (barrel distortion) and actual lens deformation (pincushion distortion) in my mental model. I made a correction to the post, it should not change the outcome, but pretty much reversed the “zoning” :blush:.


Let’s hope I didn’t make you change from right to wrong, then. :stuck_out_tongue:


Okay, a fine technical explanation from someone else.

Personally, my 4K TV looks great with SD TV input & 1080p input (from computer).

Here from pimax reddit.

& the Pimax 4K looks fine with 1080p or QHD upscaled to UHD with SS.

Simple truth: it will depend on the eye of the beholder. The general populace will not see a difference. But like you said, some will.

The upscale is not a large one anyway.


As I said: I am sure it will be perfectly good enough.

As for the technical matter: Yes, VR necessitates non-rectilinear resampling, as has, as it happens, been mentioned in the last few posts above (it is still done using linear math, by dividing the image into a discrete grid, much in the same manner a round object in a game is approximated using a polygon, but that’s neither here nor there), but that is what the compositor does, when it compensates for the distortions of the lens. The on-HMD upscaling is an additional stage, after that, and is very much a 2D rectilinear jobbie.

It is not I, who right now claim the 8k needs more resources; It is Pimax. If I’m misinterpreting their words, maybe it would have been better not to make a statement at all, than making one that leaves too much to deduction.

(EDIT: As for how much one upscales and what that does: Upscaling by one pixel is a really nasty case, if one really, really, wants to perfectly preserve proportions – each and every pixel needs to blend a bit with its neighbours, softening the image – not that such high sample rates are ever employed :7. Easiest and best results are when you can use integer scale factors: double, triple, etc, size (EDIT2: Our 1.5 scale factor should be a semi-easy case).)
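A little Python sketch of that, purely illustrative – just a simple linear mapping of target pixels back onto the source grid (real scalers use fancier kernels, but the fractional phases are the point):

```python
# For each target pixel, compute where it lands on the source grid.
# The fractional part is the blend weight with the neighbouring source
# pixel: 0.0 means "lands exactly on a source pixel, no blending needed".

def sample_phases(src_len, dst_len):
    """Fractional source-grid offset for each target pixel."""
    phases = []
    for t in range(dst_len):
        s = t * src_len / dst_len   # source coordinate of target pixel t
        phases.append(round(s - int(s), 3))
    return phases

print(sample_phases(4, 8))  # 2x upscale: phases just alternate 0.0 / 0.5
print(sample_phases(4, 6))  # 1.5x upscale: a fixed cycle of 0.0, 0.667, 0.333
```

With an integer factor every source pixel gets treated the same simple way; at 1.5x you get a short repeating cycle – semi-easy, as said – while an awkward near-1 factor would smear nearly every pixel a little.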


@SweViver doesn’t Steam by default SS a bit over the native res of the headset?

The 8K will need more resources to create a quality image at 2xQHD to upscale to 2xUHD, vs the 5K, which will just need a bit of SS to produce a quality image.
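Rough numbers, for what it’s worth – just arithmetic on the commonly quoted resolutions, in Python:

```python
# Per-eye pixel counts for the quoted 8K pipeline: render at QHD,
# upscale to the UHD panel. (The 5K renders at its panel resolution.)
qhd = 2560 * 1440   # per-eye render target
uhd = 3840 * 2160   # per-eye panel resolution

print(f"render: {2 * qhd / 1e6:.1f} MPix total, panel: {2 * uhd / 1e6:.1f} MPix total")
print(f"upscale: {uhd / qhd}x in pixels, {3840 / 2560}x linear")
```

So the on-HMD scaler bridges a 2.25x gap in pixel count (1.5x linear), on top of whatever SS the render target gets.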

In the end we will have to see & decide for ourselves. To me, trying it blind would be better than knowing. :wink:


Yes; We’ll see…

…but, before I shut up, something you just wrote compels me to say something about supersampling:

It is not just a matter of reducing jaggies on edges; Just like the word states, you are taking more samples from the world, sampling more densely: More detail goes, averaged, into each rendered pixel, and it is closer to geometrically correct; Not just for polygon edges, but also with textures; More so than you get out of the mipmapping, because mipmapping is prequantised, and doesn’t care if you’ve moved your head a millimetre to the side - your current mipmap is what it is, and attains a “picture-like” sense, more than a “world” one.
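A toy illustration of that “averaged into each pixel” point – a made-up 1-D “scene” with stripes finer than a display pixel, purely illustrative Python:

```python
# A "scene" with bright/dark stripes a quarter of a display pixel wide.
def scene(x):
    return 1.0 if int(x * 4) % 2 == 0 else 0.0

# Render n_pixels, taking `ss` evenly spaced samples per pixel and averaging.
def render(n_pixels, ss):
    out = []
    for p in range(n_pixels):
        samples = [scene(p + (i + 0.5) / ss) for i in range(ss)]
        out.append(sum(samples) / ss)
    return out

print(render(4, 1))  # 1 sample/pixel: every sample happens to hit a bright stripe
print(render(4, 4))  # 4 samples/pixel: each pixel averages to the true 50% coverage
```

Point-sampling reads the scene as all-bright even though half of it is dark; with supersampling the sub-pixel detail still contributes, averaged, to every output pixel.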

There is a quality that I find hard to describe, when you have a healthy amount of supersampling, with our low-ish resolutions: The virtual world feels more “solid”, “stable”, “real”, and it is not only because there are fewer pixels popping in and out of existence, and crawling along polygon edges, because hand in hand with those things, you get everything not “jumping” from pixel to pixel, when you pan across the landscape, but moving smoothly instead – this, amazingly, even when dropping frames.

Now; If one thinks of the imagery as a brook of flowing water, the non-integer-scale-factor upscaling should to a degree come across as a grid of “standing waves”, that displaces the image, like the way a rock on the bed of the brook makes the moving water “bulge up” as it flows over it. This would disrupt that nice feeling of stability, that supersampling afforded us, producing a “shimmering” quality, kind of like projecting onto coarse fabric and moving the projector around. This is regardless of how much one supersamples, because it occurs after the entire supersampling procedure is performed and done with – better than with no supersampling, but then; Everything is. :7

Will love it, when I’m proven wrong. :7


Yes, the default setting on steamVR for the vive is 1512x1680 per eye (the actual resolution of the headset being 1080x1200 per eye) - this is what it calls “100%” in the supersampling setting. So the headset is around 2m pixels and it defaults to 2.5mp so about 1.25x SS

2560x1440 per eye is about 3.6mp and with 1.25 SS is about 4.5mp. Obviously the increased FOV is going to affect performance too.

SteamVR also increases SS past 100% based on what GPU it detects; not sure what it does for a 1070, but for a 1080 Ti it recommends what it calls 200%, which is about 5mp.
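If I have the convention right (SteamVR’s percentage scales the pixel count, so linear dimensions go by the square root – I believe older versions scaled the linear resolution instead), the 200% figure checks out roughly like this:

```python
import math

BASE_W, BASE_H = 1512, 1680  # SteamVR "100%" render target for the Vive, per eye

def target_res(percent):
    # The percentage scales pixel count, not width/height, hence the sqrt.
    k = math.sqrt(percent / 100)
    return round(BASE_W * k), round(BASE_H * k)

w, h = target_res(200)
print(f"200%: {w}x{h} = {w * h / 1e6:.1f} MPix per eye")
```

Which lands at roughly the 5 MPix quoted for the 1080 Ti recommendation.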


Are you talking about the noise (or artifacts) that appear in a rendered frame that is also supersampled along with actual real data? It can create a kind of pixel crawl effect over a rendered surface which is distracting.


I take from the new GPU requirements that Brainwarp is either not implemented or did not give the perceived performance boost required. Given Pimax’s continued evasiveness in answering any questions on the subject, my expectations are very low.


No, just the fixed pattern in which (probably) interpolated samples from the smaller source image are placed onto the larger native res target one, shifting things around somewhat.
-Think some older games and demos, when realtime scaling of images was new and hot, and you saw columns and rows “unwrapping” themselves across the sprite, as it changed size, sampling nearest neighbour - just not nearly as dramatic, not nearest neighbour, and fixed in place and scale (per 2x2 pixels).

I am not sure which particular noise you are referring to, but kind of get the feeling you come from the high end of things, with ray/path tracing, especially when involving ambient occlusion (EDIT: …and/or global illumination)…

I actually asked for somewhat the opposite of getting rid of such noise, a bit further up in my post, when mentioning mipmaps, where I feel the mipmap bias might not allow me the little bit of aliasing that lets my persistence of vision pick up somewhat more detail temporally, than the physical resolution of the display panel can represent geometrically. :stuck_out_tongue:

(EDIT: Just throwing in something unrelated… Did somebody speak of 1.7 default supersampling for Rift? Is this official data? I was under the impression it would be lower than the Vive’s 1.4, on account of the distortion profile of CV1 necessitating less, which would be in line with people reporting slightly better performance with Rift, even before taking ATW etc into account…?)


I presume the effect described is what I noticed going from a tube TV to an LCD TV. The picture was quite clear but at times seemed to look like a little bit of light sand blowing across the screen.

Took a bit before I got used to the effect, so to speak, on those early LCD TVs. Lol


Ah right yes. I am an animator (rendering a big job with VRay this weekend) so yes I come from a path tracing world. But I did not know that was evident from what I wrote lol.

The noise effect I was referring to is with pre-rendered animated textures; there is noise in there that is random from frame to frame. (We attempt to remove it using a denoiser pass, e.g. NVIDIA’s or V-Ray’s.) Supersampling can make it more obvious. This is the crux of the problem in the TV world: when they try to scale SD video to HD, it enhances all the crap like compression artifacts, edge crawl, haloing etc. So yes, I see your point about a softening pass, which is what I imagine Samsung do at the edge detection part, where they sharpen the softened pass.

Noise reduction which attempts a softening also softens the entire image, and you get a kind of block crawl effect with that too, so it needs to be more precise.
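For anyone curious, the “soften, then sharpen what the softening ate” idea is basically an unsharp mask; a toy 1-D version in Python (numbers made up; real TV pipelines are far more elaborate and edge-aware):

```python
# Blur with a 3-tap box filter (edges clamped), then add back the
# difference from the original: out = src + amount * (src - blurred).

def box_blur(sig):
    n = len(sig)
    return [(sig[max(i - 1, 0)] + sig[i] + sig[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def unsharp(sig, amount=1.0):
    return [round(s + amount * (s - b), 3) for s, b in zip(sig, box_blur(sig))]

print(unsharp([0, 0, 0, 1, 1, 1]))  # note the over/undershoot either side of the edge
```

That over/undershoot around the edge is exactly where the haloing mentioned above comes from when it’s overdone.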

Samsung’s Q9S has tried to improve scaling with an AI engine that we can hope will one day appear in HMDs.

Anyway, I’m waffling and probably going off in all sorts of directions here. Need to pass the time somehow haha.


I have no idea what that was lol. I loved my Iiyama 22", it was HUGE! haha, thanks for the nostalgia trip.


Anyone knows…? (20 cba)


If the rendering target resolution is 1512x1680 (~2.540 MPix) and the display resolution is 1080x1200 (~1.296 MPix) then the supersampling factor is ~1.96.
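In Python, for anyone double-checking:

```python
# Pixel-count ratio between the SteamVR 100% render target and the
# Vive panel, per eye.
render_px = 1512 * 1680  # 2,540,160
panel_px = 1080 * 1200   # 1,296,000

print(f"{render_px / 1e6:.3f} MPix / {panel_px / 1e6:.3f} MPix "
      f"= {render_px / panel_px:.2f}x supersampling")
```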


Whoops, I was doing several calcs at once and grabbed the wrong figure.