Brainwarp is not something the panel manufacturer cares about; the panel refresh rate stays the same. The chip controlling the signal to the LCDs (ANX7530), plus some logic before it I guess (maybe it's part of the chip), enables the two displays to refresh with a time offset between them instead of simultaneously.
And I was wrong about rendering monoscopic at higher FPS (120-180): the chip used doesn't support that mode, so it will be, for example, 5120x1440@90Hz/FPS, not 2560x1440@180. So no change in the rendering pipeline is needed; the GPU works just the same. It is similar to the technique used by PSVR: there is a case where the console renders 60 FPS but it is doubled to 120 to match the 120Hz that the PSVR HMD (OLED) supports. Pimax will not double 90 FPS to 180, but will instead show half of the image for half of the time (T = (1/RefreshRate)/2 seconds), so it's a simulated doubling of Hz/FPS. Interesting, but not a proven concept yet.
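Just to make the T = (1/RefreshRate)/2 arithmetic concrete, here is a quick sketch (my own illustration, nothing from Pimax) of the full-frame time and per-eye display slot at a few refresh rates:

```python
def half_frame_ms(refresh_hz: float) -> float:
    """Per-eye display slot in milliseconds: T = (1/RefreshRate)/2."""
    return (1.0 / refresh_hz) / 2.0 * 1000.0

for hz in (60, 90, 120):
    full_ms = 1000.0 / hz
    print(f"{hz}Hz -> full frame {full_ms:.2f}ms, per-eye slot {half_frame_ms(hz):.2f}ms")
```

At 90Hz this gives a ~11.1ms frame and a ~5.6ms per-eye slot, which is where the 5.5ms figure in the timeline below comes from (rounded).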
Good news: Brainwarp will probably work with any VR game. Bad news: no monoscopic trick to reduce GPU load... And the problem with my earlier idea is that the CPU would have to let the game run at 120-180 FPS, which would roughly double the CPU workload. Not wise at all.
T0 = 0ms (start time): GPU provides a 5120x1440 image to the HMD (buffer); the left-eye half is sent to the left display (LD), RD is off
T1 = 5.5ms ((1/90Hz)/2): the HMD already has the right-eye image in the buffer; 2560x1440 is displayed on RD, LD backlight is off
T2 = 11ms: GPU provides a new 5120x1440 image; the left half is sent to LD
T3 = 16.5ms: the right half of the image is sent to RD.
Note: this is an example for 90Hz, so the GPU has to render, or at least provide, 90 frames per second (some possibly duplicated) to the HMD. Also, when I write "sent to display" (left or right), I mean the backlight is turned ON at that point; I assume the crystals are actually set before that, so while we can't see it, the crystals on the display for the eye that is not currently seeing an image are already receiving their half of the image from the buffer. Hope someone understood what I meant.
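The T0-T3 steps above can be sketched as a simple event schedule. This is only my interpretation of how the alternating backlights would be sequenced, not anything from Pimax's actual firmware:

```python
REFRESH_HZ = 90
FRAME_MS = 1000.0 / REFRESH_HZ   # ~11.1ms per full 5120x1440 frame
SLOT_MS = FRAME_MS / 2.0         # ~5.6ms per-eye backlight slot

def schedule(num_frames: int):
    """Yield (time_ms, active_display) events: LD lights first, then RD,
    within each full frame, so only one backlight is on at a time."""
    for frame in range(num_frames):
        t0 = frame * FRAME_MS
        yield (t0, "LD")             # left half shown, RD backlight off
        yield (t0 + SLOT_MS, "RD")   # right half shown, LD backlight off

for t, display in schedule(2):
    print(f"T = {t:5.1f}ms: backlight ON for {display}")
```

Running this for two frames reproduces the T0-T3 timeline (0ms LD, ~5.6ms RD, ~11.1ms LD, ~16.7ms RD; the post's 5.5/16.5ms figures are just rounded).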