Quality of upscaler on 8K?


It is because of adult beverages, like Seb will say. My fingers get too loose. :wink:


I agree that Pimax would need to get an updated chip, but (given how primitive the scaler chip seems to be) there’s a good chance that it’s a “fixed function” chip. That is, there might be no firmware and that scaling is done via a hardware circuit.


oh this made me laugh ffs


This sounds like really important information for people who might think the sharpness of the 8K could improve over time with software (and pick it over the 5K), but it also strikes me as something the M2 testers might not easily be able to confirm as part of their reviews?


True and true. Imo, the sharpness of the 8K will not improve, as far as the headset is concerned. However, over time, more powerful GPUs will enable higher levels of supersampling to become feasible and THAT will improve the image quality. Basically, the 8K will always be a bit “fuzzy” like an actual photograph, but in time (with GPU upgrades), the fuzzy details will be of higher quality.

I’m not sure that the fuzziness will be “unacceptable”. When I run Elite Dangerous at 2560x1440 on my 4K LCD monitor, the image looks “fine” (although it looks “crisp” at 4K - 3840x2160).


All the above being true, I think you may have made the call for me on 5K vs 8K.

In the above situation it makes sense for me to get the 5K. The logic being I’m going to be replacing my headset roughly every 2 years, going by the Vive. More powerful GPUs down the line improving the 8K’s image would be good, but by that time I would be switching off the 8K to whatever comes next: 8K X… 8K X Plus… And the image those headsets would produce with that same horsepower would, I presume, be a lot better due to native resolution. If I’m lucky, the Pimax wireless module for the 8K will work with the X, as will the controllers… which would be a nice plus.

Maybe the video reviews will sway me. Maybe the blurriness people reported was a bad setup, etc. But where I was not leaning in a direction before, I feel like I have a solid reason to do so now.

5K + 1070 now, then 8K X + 2180 later, seems like the path I will take.


If you imagine the scaler chip is something like a CPU running a program, you would need to process 2560×1440×90 pixels per second. This equals ~330 Mops, where one “op” is the complete scaling operation for one pixel. A CPU of such power is something you would find in a phone or a PC, so that is not very likely.
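The throughput figure above is easy to sanity check with plain arithmetic (one scaling “op” per input pixel, per frame, at 90 Hz):

```python
# Scaling workload implied by a 2560x1440 input at 90 Hz,
# counting one complete scaling "op" per input pixel.
width, height, refresh_hz = 2560, 1440, 90
ops_per_second = width * height * refresh_hz
print(ops_per_second)  # 331776000, i.e. ~330 Mops
```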

The other possibility is an FPGA, but the prices of those are out of the budget Pimax has for the complete headset.

So the only option is that it is a “dumb hardwired” chip, which either has some built-in flexibility or it does not. If it did, we would already have known about it, so it most likely does not. I would simply assume that the scaler chip is fixed and the only way to “improve” it in the future would be to replace it with something else (if that is possible at all).


This. Also, for something that requires as little latency as possible, the scaler has ASIC written all over it.


So I’m curious on a more technical knowledge of how the upscaler would work and whether the following would conceivably be possible once eye tracking is implemented:

We know that the 8K takes a 5K input which is then upscaled to 8K by the chip - by, I assume, uniformly averaging known neighbouring pixels to fill in the unknown pixels.
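The chip’s actual algorithm isn’t public, but the neighbour-averaging idea can be sketched for a 2× grayscale upscale (purely my own guess at the scheme, not the real hardware):

```python
def upscale2x(img):
    """Double a grayscale image: even output positions reproduce the
    source pixels, odd positions get the average of the nearest known
    neighbours. An assumed scheme - the real chip's method is unknown."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            # the up-to-4 nearest known source pixels around (x/2, y/2)
            y0, x0 = min(y // 2, h - 1), min(x // 2, w - 1)
            y1, x1 = min((y + 1) // 2, h - 1), min((x + 1) // 2, w - 1)
            out[y][x] = (img[y0][x0] + img[y0][x1] +
                         img[y1][x0] + img[y1][x1]) / 4
    return out

print(upscale2x([[0, 4], [8, 12]]))
```

Note how the in-between pixels are only blends of their neighbours - new pixels, but no new detail, which is why the result looks softer than a native render.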

Would it be conceivable, with eye tracking, to ‘throw away’ 2-4 pixels to give the upscaler the ‘focal point’ (where the eyes are looking)? For instance, by setting the ‘x’ value of the focal point into the R value of a pixel (depending on the max R value, this might need something like 256·R1 + R2 if R only went to 256) and the ‘y’ value into the G value. The upscaler could then build an image using 4K quality for 1/4 of the screen (centred on the focal point given by the 2 throwaway pixels) and 1080p quality for the other 3/4, outputting 4K quality for a small area while every other input pixel fills 4 output pixels - combining to a 2×4K image from less than a 5K input.

Before we go any further: this would be foveated transmission, not foveated rendering, so taking advantage of it would still require 2×4K rendering by the graphics card.
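The coordinate-in-pixel trick could look something like this (my own hypothetical encoding, assuming 8-bit colour channels and the 256·R1 + R2 split suggested above):

```python
# Hypothetical encoding of a gaze coordinate into two "throwaway"
# 8-bit colour channel values (256*R1 + R2, as suggested above).
def encode_coord(v):
    assert 0 <= v < 256 * 256  # fits in two 8-bit channels
    return v // 256, v % 256   # (high byte, low byte)

def decode_coord(hi, lo):
    return 256 * hi + lo

hi, lo = encode_coord(2880)
print(hi, lo, decode_coord(hi, lo))  # 11 64 2880
```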


A. Eye tracker generates screen (x,y) co-ordinates. These (x,y) co-ordinates get converted to (X,Y) where:
X = { 960 if x <= 960, x - mod(x,2)* if 960 <= x <= 2880, 2880 if x > 2880 },
Y = { 540 if y <= 540, y - mod(y,2)* if 540 <= y <= 1620, 1620 if y > 1620 }
*mod(A,B) is the modulo function, finding the (in this case) non-negative remainder of x (or y) when divided by 2 - so will be {0,1}.
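Step A can be written as one small function (a sketch of the piecewise definition above: clamp to the valid range, then round down to an even value so the window aligns with 2×2 quads):

```python
def snap(v, lo, hi):
    """Clamp the gaze coordinate into [lo, hi], then drop to an
    even value (v - mod(v, 2)) so the focal window aligns with
    2x2 pixel quads - step A from the post above."""
    v = max(lo, min(v, hi))
    return v - v % 2

X = snap(1501, 960, 2880)  # 1500: in range, odd value rounded down
Y = snap(100, 540, 1620)   # 540: clamped to the lower bound
print(X, Y)
```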

B. The graphics card renders 2×4K images; the first image is prepended with the 1-2 pixels (I will suppose 2 pixels are required) needed to relay (X1, Y1), and the second image likewise with the 2 pixels needed to relay (X2, Y2). (Total pixel count: 2×3840×2160 + 4.)

C. PiTool downsamples pixel quads (2×2 pixel groups)

a b
c d

into a single averaged pixel if the quad is not in the area defined as the UHD area, i.e. if:

(xb < X - 960 or xa > X + 960), or
(yd < Y - 540 or yb > Y + 540)
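A minimal sketch of step C, assuming a grayscale image with even dimensions (here the averaged quads are left in place; step D would then drop 3 of the 4 copies from the signal):

```python
def downsample_outside_fovea(img, X, Y, half_w=960, half_h=540):
    """Average each 2x2 quad lying fully outside the foveal window
    centred on (X, Y); quads inside the window keep full resolution.
    A sketch of step C above, not any actual PiTool code."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            # quad corners: a=(x,y) top-left, b right, d bottom-right
            outside_x = x + 1 < X - half_w or x > X + half_w
            outside_y = y + 1 < Y - half_h or y > Y + half_h
            if outside_x or outside_y:
                avg = (img[y][x] + img[y][x + 1] +
                       img[y + 1][x] + img[y + 1][x + 1]) / 4
                for dy in (0, 1):
                    for dx in (0, 1):
                        out[y + dy][x + dx] = avg
    return out
```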

D. Pixels are sorted in some ‘countable’ manner, with the now-redundant downsampled pixels removed from the signal - cutting 3 out of every 4 pixels outside the UHD area. (For instance, this could be done in a spiral going out from (X,Y); if the above is technically possible I could go into more depth.)
(Pixel count now:
2 × (1920×1080 [UHD area] + 1920×1080×3/4 [HD-only area]) + 4 = 7,257,604 < 7,372,800 = 2×2560×1440.)

E. Transmit the signal to the upscaler.

F. Reconstruct the image (hence requiring the ‘countable’ manner to create a mathematical algorithm).
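The pixel budget in step D checks out with a few lines of arithmetic (two eyes rendered at 4K, foveal window kept at full res, the rest averaged 4:1, plus the 4 coordinate pixels):

```python
# Pixel budget for the foveated-transmission scheme vs. the 5K input.
full_4k  = 3840 * 2160                # rendered pixels per eye
uhd_area = 1920 * 1080                # foveal window, full res
hd_area  = (full_4k - uhd_area) // 4  # remainder, downsampled 4:1
total    = 2 * (uhd_area + hd_area) + 4  # +4 coordinate pixels
budget   = 2 * 2560 * 1440            # native 5K input signal
print(total, budget)  # 7257604 7372800 -> it fits
```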


While this is theoretically sound, it is not very practical, for two reasons:

  1. You assume you have eye tracking working in the headset and basically do “foveated downsampling” on the card, using some kind of “compressing” or “ordering” algorithm to pack all the video data - thus requiring a non-trivial decompression algorithm in the headset.

  2. You assume that the card will render it in full res.

Now, if you already have an eye position available on the card, it would be much more useful to do complete foveated rendering on the card and thus:
a) avoid the “foveated downsampling”
b) avoid wasting rendering power on the resolution which will be discarded anyway.

So you basically describe foveated rendering, but without the advantage of saving the rendering power.

On the other hand, if you could afford rendering the scene on the card in full res (i.e. 2×4K), then it would be easier to just snap an additional DP cable onto the headset, scrap the scaler and all the foveated stuff, and have an 8K-X without any compromise.

The point is, there are two reasons why 8K exists today and makes sense with only 2560x1440 inputs:
a) It allows using one DP cable to pass the video to the headset
b) It can be powered by existing gfx cards - 2x4K full res, not likely.
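As a rough sanity check on (a) - back-of-the-envelope numbers only, ignoring blanking intervals and any stream compression:

```python
# Link budget for uncompressed 2x4K @ 90 Hz at 24 bits per pixel.
pixels_per_second = 2 * 3840 * 2160 * 90
bits_per_second = pixels_per_second * 24
print(bits_per_second / 1e9)  # ~35.8 Gbit/s
# A single DisplayPort 1.4 link (HBR3) carries roughly 25.9 Gbit/s
# of payload, so full-res 2x4K @ 90 Hz does not fit on one cable.
```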

While your proposal tries to address a), it comes short on b).


You got me curious how well the 5K+ will respond to supersampling. With it being easier to run, pushing the SS numbers up even higher could really push the image quality up as well.

I’d really like to see an image comparison of a 5K+ with maxed-out SS settings and an 8K with stock settings.

I hope @SweViver @VoodooDE and @mixedrealityTV can talk a little about supersampling during their reviews


There’s no doubt that SS will improve the view on the 5K/5K+. You will still have larger pixels (than an 8K), so it won’t look quite as good (more aliasing).


This decision is just getting harder for me lol 5 more days!


I tried to ask a similar question to SweViver during his live stream.


Pretty sure @SweViver @mixedrealityTV and @VoodooDE will make use of all settings and tweaks to show us the best results for both panels, to reduce the subjective aspects to a minimum.


Great responses guys. Glad I asked the question now and it wasn’t too daft after all.

3 musketeers - a lot is riding on the quality of your analysis… I’m sure you won’t let us down.



how resizing works is you take an image that’s small and paste it inside of a larger picture.

the larger picture is compared to the resized picture, like comparing two square grids. in the grid on the larger (non-resized) picture, each square has its own unique number, while on the resized grid many squares share the same number.

for a 5k resize compared to an 8k resize, the grid on the 5k has fewer squares that share the same number; the 8k has more squares that share the same number.

what a sharpener and upscaler does is separate the squares so the squares make up a picture, whereas a bad upscaler and no sharpener doesn’t separate the squares, so they all share the same value and details are lost.

with the 8k you get more pixels, so if they are separated and sharpened you get more separated squares. more separated means more detailed, and so the perception of less sde - if the upscaler separates the squares.


Mate that sounds like some sort of riddle. Explain it again plz


This might help.


using two images with the same aspect ratio but different sizes, the smaller picture fits into the area of the larger picture; around the smaller picture are pixels to fill so that the two pictures end up the same size.

take a square broken up into 4 smaller squares inside the larger square.

  • top row left
  • top row right
  • bottom row left
  • bottom row right

in the larger picture, every pixel is true.

the smaller picture puts the same value into the blended pixels:

  • the bottom row blends into the top row,
  • or the left column blends into the right column,
  • or diagonally the pixels are the same value

5k has fewer shared blended pixels than 8k does.

8k has more pixels defining the grid than 5k.

  • if the upscaler quality is good, then the added definition of 8k over 5k is useful; the extra boxes of the 8k inside the square are filled in so as to define the larger box.
  • else, if the upscaler is bad, there are more blended shared pixels in 8k than in 5k; the smaller boxes in the larger box are useless and the square shares pixels with other squares it shouldn’t. with more blended pixels you lose definition and the picture is softer.
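One concrete way to see the “shared blended pixels” (my own illustration, not the actual chip): a plain nearest-neighbour 2× upscale repeats every input pixel into a 2×2 block, so each block shares one value and adds no definition - the “bad upscaler” case.

```python
def nn_upscale2x(img):
    """Nearest-neighbour 2x upscale: every source pixel is repeated
    into a 2x2 block, so all four output pixels in the block share
    the same value - blended squares that add no new detail."""
    out = []
    for row in img:
        doubled = [p for p in row for _ in (0, 1)]
        out.append(doubled)
        out.append(doubled[:])
    return out

print(nn_upscale2x([[1, 2]]))  # [[1, 1, 2, 2], [1, 1, 2, 2]]
```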


Given the recent revelations regarding the resources required to attain the same clarity on an 8K vs a 5K+, I’m wondering if this will be a possibility or not? If not, then I think I will probably switch my pledge, since by the time the 8K can be powered sufficiently (probably 2-3 generations of graphics cards beyond the 1080 Ti I’m going to be using), newer models will probably be available that might have this feature.