2017 (Q4) Bandwidth/Refresh Summit


This Topic is for discussions on Bandwidth & Refresh optimization & stabilization.

To keep this Topic clutter-free, please post here only if you have technical skills in hardware/software or are a Pimax Team member.

As such, this will be my only post here, other than maintaining the lists of contributors & Pimax Staff below.

PiMax Staff:
@PimaxVR @xunshu @Matthew.Xu @bacon

Contributors:
@LoneTech @brian91292 @sjefdeklerk @VRGIMP27 @risa2000



@xunshu, @bacon: I’ve spent a little time considering the limits of ANX7530 and MIPI-DSI. It seems to me it could be possible to bypass some of the limits by exploiting others. In particular, I believe you don’t yet have 100% width utilization due to the optics? When reading the specs for Sharp LQ101R1SX01, as a sample of a MIPI-DSI display panel, it appears there’s a Command Mode in which arbitrary placement of the incoming video data is possible (with 64 pixel granularity). This mode would not be dependent on things like horizontal porch timing, and would allow drawing a 95% width image at 90Hz within the 720MHz pixel clock limit, if the ANX7530 can be programmed to use it and your panels support it. This would be most relevant to the 8K-X model. For the 8K model, you also have a scaler function, which may have other limits. I’m also only guessing at whether the ANX7530 can support Command Mode image updates, and whether your display panel does. Would you be willing to share documents on these parts so I could look for similar hacks?
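A quick back-of-the-envelope check of the Command Mode idea in Python (my own sketch; the 720 MHz limit and the 64-pixel placement granularity come from the post above, while the assumption that Command Mode lets us skip horizontal porch/sync entirely is mine):

```python
# Could a ~95%-width image at 90 Hz fit under the ANX7530's pixel clock limit
# if Command Mode removes the need for horizontal blanking? (Sketch only.)

PCLK_LIMIT_HZ = 720e6      # ANX7530 pixel clock ceiling
PANEL_W, PANEL_H = 3840, 2160
REFRESH_HZ = 90
GRANULARITY = 64           # Command Mode placement granularity

# Widest active region that is a multiple of 64 pixels and still fits the
# clock budget, assuming only the panel's line count must still be scanned.
max_width = int(PCLK_LIMIT_HZ / (REFRESH_HZ * PANEL_H))
max_width -= max_width % GRANULARITY

print(max_width, max_width / PANEL_W)  # 3648 pixels, i.e. ~95% of 3840
```

Under these assumptions the budget allows 3648 active pixels per line, which matches the 95% width figure mentioned above.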


Elaboration on the porch timing trick: Most displays operate on a fixed pixel clock, with some subdivision of it used for the horizontal and vertical refresh frequencies. Let’s examine, for example, 3840x2160 at 90 Hz, using CVT Reduced Blanking version 2.

$ ./cvt12 3840 2160 90 -b -v
 1: [V FIELD RATE RQD]         :       90.000000
 2: [H PIXELS RND]             :     3840.000000
 2.5: [ASPECT_RATIO]           :       16:9
 2.5: [V SYNC]                 :        5.000000
 3: [LEFT MARGIN (PIXELS)]     :        0.000000
 3: [RIGHT MARGIN (PIXELS)]    :        0.000000
 4: [TOTAL ACTIVE PIXELS]      :     3840.000000
 5: [V LINES RND]              :     2160.000000
 6: [TOP MARGIN (LINES)]       :        0.000000
 6: [BOT MARGIN (LINES)]       :        0.000000
 7: [INTERLACE]                :        0.000000
 8: [H PERIOD EST]             :        4.931070
 9: [Actual VBI LINES]         :       93.286041
 9: [VBI LINES]                :       94.000000
10: [Minimum VBI Lines]        :       34.000000
10: [ACT VBI LINES]            :       94.000000
11: [TOTAL V LINES]            :     2254.000000
12: [TOTAL PIXELS]             :     4000.000000
13: [Non-rounded PIXEL FREQ]   :      811.440002
13: [ACT PIXEL FREQ]           :      811.439026
14: [ACT H FREQ]               :      202.859756
15: [ACT FIELD RATE]           :       89.999886
16: [ACT FRAME RATE]           :       89.999886
20: [H BACK PORCH]             :       80.000000
21: [H SYNC RND]               :       32.000000
22: [H FRONT PORCH]            :       48.000000
23: [V FRONT PORCH]            :       23.000000
# 3840x2160 @ 90.000 Hz Reduced Blank (CVT) field rate 90.000 Hz; hsync: 202.860 kHz; pclk: 811.44 MHz
Modeline "3840x2160_90.00_rb"  811.44  3840 3888 3920 4000  2160 2183 2188 2254 +hsync -vsync

In this example, we have 4000 pixels per scanline, of which only 3840 are used; only they are actually read out of the line buffer in the ANX7530, for instance. The unused cycles are spent on back porch, sync pulse, and front porch (rows 20-22). However, in common readout modes (e.g. not the Command Mode mentioned above), there’s nothing in the signalling to distinguish the porch and active regions; it’s merely that the active regions end up displayed.

If we have, for instance, optics that only manage to display 80% of the screen, but are shifted so the optical center may end up in a variety of positions on the screen, we can reduce the active region by changing our front and back porch. For instance, we could grow the front porch and reduce the active pixels, adding a black band on the left side of the panel and reducing the input pixel rate required. We could thus tell the DisplayPort receiver, which is rate limited to 720MHz pixel clock, that the panel is smaller than it really is. The panel would receive the same signal, only the band seen as image by the panel but porch by the bridge chip would be entirely black. The neat thing about this porch tuning is that it has no alignment limits and could be done symmetrically on both displays without issue.

We won’t reach exactly 720 MHz effective (active/visible) pixel clock because we still need to spend some time on sync and audio transfer (which is typically done during the vertical retrace in DisplayPort - a whopping 376,000 cycles in this example, which is admittedly too fast at 811.44 MHz). But by altering the effective horizontal resolution between buses while keeping the horizontal sync frequency, we can probably get pretty far.

If we work backwards from this mode and the HBR2 transfer rate, we get a total horizontal pixel count of 5.4e9 × 4 lanes × (8/10) / 24 / 90 / 2254 = 3549 (and some change) instead of 4000. If we then assume we can squeeze the horizontal sync and porches into just 13 cycles of the pixel clock (this is only between the GPU and the DP receiver, so not restricted by panel limits) instead of 80+32+48=160, we reach a possible resolution of 3536 (221×16), which is 92% of 3840, easily over the 80% horizontal utilization suggested in the FAQ (under pixels per degree). On the 8K, similar tweaks are possible but should preferably also be done in the scaler, to get the most out of what pixels are sent.
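The backwards calculation above can be reproduced directly (a sketch; the lane count and bit depth are the standard HBR2 4-lane / 24 bpp assumptions, and the 13-cycle blanking figure is taken from the post):

```python
# Rework the HBR2 bandwidth arithmetic from the post above.

LANE_RATE = 5.4e9   # HBR2, bits per second per lane
LANES = 4
CODING = 8 / 10     # 8b/10b line coding overhead
BPP = 24            # bits per pixel
REFRESH = 90
TOTAL_LINES = 2254  # total vertical lines, from the CVT-RB mode above

pixels_per_second = LANE_RATE * LANES * CODING / BPP      # 720e6 pixels/s
total_h = int(pixels_per_second / REFRESH / TOTAL_LINES)  # 3549 (and some change)
active_h = (total_h - 13) // 16 * 16                      # blanking squeezed to 13 cycles
print(total_h, active_h, active_h / 3840)                 # 3549, 3536, ~92%
```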


Thanks, I will forward this to our driver expert.


Thank you. You will find that this is more of a firmware than driver feature; this timing data is passed from the firmware via the EDID. The porch shifting, however, would require cooperation with the driver; it would be associated with the IPD adjustment.

I’ve also been thinking about a variant of pixel stretching. By exaggerating the screen angle in the projection matrix, we can make the peripheral part of the rendered plane pass closer to the eye, and therefore get a lower number of pixels per degree there, in exchange for a higher number in the central region. The stencil mesh can be used to cut the plane to shape. This trick would make traditional single-view renderers produce a more balanced pixel distribution. I haven’t done the full math though, so it might not be a solid improvement.

Either way, as you know getting the 8K to perform well is a higher priority. Could you get an answer about my question above?


I wonder if they have given thought to Carmack’s idea for deep interlacing the display. That would allow us to utilize all lines of resolution visible to the lenses in a very efficient manner, and would also help with the bandwidth constraints. Carmack mentioned that he made a prototype during his GDC talk, that it refreshed in a swizzle pattern, and that it worked well.

Doc OK gives his thoughts on interlacing in this thread.

80 Hz (which Pimax can presently handle) would be 160 fields per second with 2:1 interlacing.


A lot of people seem to forget that classic TVs with their low-frequency interlacing didn’t look good just (or even mostly) because of persistence of vision, but because of slow phosphors and unfocused lines. The screen was blurred in both a resolution and a movement sense, so the discrepancies from field to field wouldn’t dominate too much. In VR, we’ve so far actively worked against the delay blur (high persistence), and the focal blur has been insufficient to cover the gaps (screen door effect), let alone blend nearby pixels, though some devices try with diffusion screens.

An odd observation: I wouldn’t be surprised if you could interlace some LCD panels by driving them with an entirely out-of-spec pattern - basically doubling the horizontal retrace pulses. Where CRTs achieve the half-line offset by literally drawing a half scanline (this is why the bottom scanline is halved), LCDs take the horizontal sync pulse to mean the next scanline should start receiving data. Higher resolution panels, like the Sharp display I mentioned in the first post, do have interlacing in an odd-even pixel pattern.

Given that our display panels are now digital devices and effectively hold a frame buffer, I would think it makes more sense to move to a compressed delta scheme. Let all the pixels behave in a hold-and-modify style, with new changes fed over an aggressively compressed signal that can skip over static parts, no matter how detailed. A modern example of an interlaced display is high resolution image-shifting projectors.
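To make the hold-and-modify / delta idea concrete, here is a toy illustration (entirely hypothetical; a real link would use something like DSC or a custom protocol, not this format). Only scanline runs that changed since the previous frame are transmitted; static regions cost nothing:

```python
def delta_encode(prev, curr):
    """Return a list of (offset, pixels) runs where curr differs from prev."""
    runs, start = [], None
    for i, (a, b) in enumerate(zip(prev, curr)):
        if a != b and start is None:
            start = i                          # open a new dirty run
        elif a == b and start is not None:
            runs.append((start, curr[start:i]))  # close the run
            start = None
    if start is not None:
        runs.append((start, curr[start:]))
    return runs

def delta_apply(prev, runs):
    """Reconstruct the current frame by patching runs onto the held frame."""
    out = list(prev)
    for off, px in runs:
        out[off:off + len(px)] = px
    return out

# A 16-pixel "scanline" where only two small regions changed:
prev = [0] * 16
curr = list(prev); curr[4:7] = [9, 9, 9]; curr[12] = 5
runs = delta_encode(prev, curr)
assert delta_apply(prev, runs) == curr   # round-trips losslessly
print(runs)                              # [(4, [9, 9, 9]), (12, [5])]
```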

Side note: the temporal discrepancy between the eyes that Doc Ok described for the Oculus DK2 is precisely what “Brain Warp” tries to exploit, by doing reprojection on the later eye to match its presentation time. The new Pimax HMDs don’t do the rolling exposure, however, and the 5K/8K might actually have to drop to a lower frame rate to order the video signal for the eyes in such a manner (the ANX7530 is designed for a horizontal split, in which both eyes receive data practically in lockstep). For the 8K X it does not matter, as the eyes don’t even share a cable, and they could be phase offset if desired. Top-and-bottom or frame-packed modes are notably less efficient over the wire than side by side, as they multiply the blanking periods. (I could be mistaken here: the panels might be scanned in a short-scanline orientation in the first place; that would also lose efficiency and could explain some of the refresh rate problems!)


@LoneTech Reading your posts here, I understand you assume that the 80% figure Pimax quotes refers to horizontal space utilization, right?

Next, I realized that I never considered audio, encoded in the DP stream. I guess this increases the bandwidth requirement for the ANX7530. Do we know what audio encoding/streaming is used in VR and how it impacts the overall bandwidth?

Concerning your suggestion about restricting the “rendering view” on the 4K panel by using command mode to redefine the image placement in the panel, I assume it would require using the DP aux channel to pass those commands to the HMD. Is that possible with the current Windows implementation? I was trying to find any info on Nvidia cards and user access to the DP aux channel, but did not find any.


Yes, I assumed there was horizontal space going unused. That may or may not currently be the case but it’s a sacrifice I would be willing to make to push the frame rate a little bit. As noted, I ended up sacrificing 8% of the resolution for 12% faster refresh rate, and that’s while assuming I couldn’t tune the vertical timing.

As for audio, it is likely negligible. Just the vertical blanking period here, even with my reduced pixel count mode, represents some 640Mbps. Compare an unusually (perhaps even ridiculously) high quality signal of stereo 24 bits per sample 192kHz; that’s about 9Mbps. Audio tends to be insignificant, as capacity goes.
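The comparison can be put in numbers (a sketch; the 24-bit/192 kHz stereo stream is the “ridiculously high quality” case from the post, and the link payload is the usual HBR2 4-lane figure after 8b/10b coding):

```python
# How big is even an extravagant audio stream next to the DisplayPort payload?

CHANNELS, BITS, SAMPLE_RATE = 2, 24, 192_000
audio_bps = CHANNELS * BITS * SAMPLE_RATE    # 9.216 Mbps of raw audio

link_payload_bps = 5.4e9 * 4 * 8 / 10        # HBR2 x4 after 8b/10b: 17.28 Gbps

print(audio_bps / 1e6, audio_bps / link_payload_bps)  # ~9.2 Mbps, ~0.05% of the link
```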

As for the command mode shifting, it doesn’t need to be controlled by the host computer, and even if it were it would likely be easier to signal over USB (where the IPD position itself would be reported, as well as tracking data) than using the DisplayPort.


I wonder if there has been any progress in stabilizing the refresh rate around 90 or not …:slight_smile:

@Matthew.Xu @xunshu @bacon


I just wonder how much latency the scaler could add, and whether it will be annoying…


5120x1440 @ 90.000 Hz Reduced Blank (CVT) field rate 90.000 Hz; hsync: 135.270 kHz; pclk: 714.23 MHz
5120x1440 @ 80.000 Hz Reduced Blank (CVT) field rate 80.000 Hz; hsync: 119.680 kHz; pclk: 631.91 MHz
5120x1080 @ 90.000 Hz Reduced Blank (CVT) field rate 90.000 Hz; hsync: 101.430 kHz; pclk: 535.55 MHz
3840x2160 @ 90.000 Hz Reduced Blank (CVT) field rate 90.000 Hz; hsync: 202.860 kHz; pclk: 811.44 MHz
3840x2160 @ 80.000 Hz Reduced Blank (CVT) field rate 80.000 Hz; hsync: 179.440 kHz; pclk: 717.76 MHz
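The pixel clocks listed above can be reproduced with a simplified CVT Reduced Blanking calculation (a sketch: it assumes the standard 160-pixel horizontal blank and 460 µs minimum vertical blank of CVT-RB, and ignores the standard’s clock-step rounding):

```python
import math

H_BLANK = 80 + 32 + 48        # back porch + sync + front porch = 160 pixels
MIN_V_BLANK_S = 460e-6        # CVT-RB minimum vertical blanking time

def cvt_rb_pclk(h, v, refresh):
    """Approximate CVT-RB pixel clock in Hz for an h x v mode at refresh Hz."""
    h_period = (1 / refresh - MIN_V_BLANK_S) / v      # estimated line period, s
    vbi_lines = math.ceil(MIN_V_BLANK_S / h_period)   # blank lines needed
    return (h + H_BLANK) * (v + vbi_lines) * refresh

for h, v, hz in [(5120, 1440, 90), (5120, 1440, 80),
                 (5120, 1080, 90), (3840, 2160, 90), (3840, 2160, 80)]:
    print(h, v, hz, round(cvt_rb_pclk(h, v, hz) / 1e6, 2), "MHz")
```

Running this reproduces all five figures above (714.23, 631.91, 535.55, 811.44 and 717.76 MHz).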

It is known that the ANX7530 has a limit of 720 MHz pixel clock.
Implications for Pimax 8K:
5120x1440@90: pclk is 714.23 MHz, very close to the limit, so stability is at risk.
5120x1440@80: pclk is 631.91 MHz, should be stable; 80 Hz confirmed stable by Pimax.
Implications for Pimax 8K X:
3840x2160@90: pclk is 811.44 MHz, far above the ANX7530 limit; it will require a new chip, which explains the delay and the price difference between the 8K and 8K X.
3840x2160@80: pclk is 717.76 MHz, even closer to 720 than 2x1440p@90, so probably not stable either - hence the need for a new chip.

Conclusion: if Pimax rushes delivery of the 8K to backers, they may get 2x2560x1440@80Hz; for 90 Hz it was noted that there is a backup input resolution of 2x1920x1080 at the source.
If Pimax can solve native 2160p@90 per eye with a new chip on the 8K X, I think this option should be offered to 8K backers who prefer to wait for 90 Hz - or perhaps that chip will not have enough inputs for 2 screens, since the 8K X uses 2xDP and the 8K 1xDP… The confusion with 3 products is real :slight_smile: .
I am excited about that solution for native 4K@90 per eye, since it could translate to a Pimax 4K v2, which could use one of the 8K X screens and be a great alternative to the Win MR headsets, if priced correctly.


@Matthew.Xu Any news on the questions asked in here, in particular @LoneTech suggestions regarding the pixel clock?

I wonder how Pimax originally planned to overcome the pixel clock limitation of the ANX7530 in the design, when it must have been clear right from the chip spec that a 720 MHz pixel clock is not enough for a naive implementation of 90 Hz refresh.

Did you plan to reduce the display active area (as suggested by @LoneTech), or did you plan to operate the ANX7530 out of spec? And if the latter, did you test such an operating mode before committing to the DisplayPort design?


@SpecsReader I am not sure what your point is here. I asked @Matthew.Xu about the limitations of the ANX7530, and you are coming up with explanations about panels.

Please do not answer for Pimax, and do not state your speculations as answers.


Some good information here, but also a bit of speculation here and there.
I believe, or hope, that the team is developing the best possible solution, and I would be very happy if, maybe by Christmas, they could state what they are planning or doing to reach a satisfactory image quality in this matter - it would be nice to know where this is going first hand rather than hoping for the best.


He is right, you are speculating, as we do not know the panel model or info. The bridge chip does seem to be the limiting factor. The bridge chip does state 90 Hz as a max value, and as Jim pointed out, electronic components often do not perform at the maximum advertised spec. Gigabyte advertises theoretical performance & states whether components work at best spec.


Well, Pimax marketed the 4K HMD as 90 Hz async, so what stops them from using the same method on the 8K…
My assumption - and I would like to be proven wrong - is that Brainwarp was “invented” to solve the async MTP problems from the 4K. Without Brainwarp, you will just get more sick at 90 Hz/eye than at 75 Hz/HMD. Give us facts, not examples of how someone markets 90 Hz as a boost, when the industry already has stable 90 Hz as the standard for 6DOF VR…


The 4K model does not use the same panel as the 8K. The 4K uses a Sharp panel at 60 Hz.

As I and others have used the V2 prototype, we can assure you it’s not the same panel, and the sub-pixel layout is something like RGBW, whereas the 4K is RGB.