For reference, the per-axis resolution difference between the Vive and Vive Pro is 1.33× (plus a few hundredths due to a slightly reduced FOV from the physically smaller display panels).
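Just to show where that 1.33 figure comes from, a quick sketch using the published per-eye panel resolutions (Vive: 1080×1200, Vive Pro: 1440×1600):

```python
# Per-axis resolution ratio between the Vive (1080x1200 per eye)
# and the Vive Pro (1440x1600 per eye) -- the 1.33 figure above.
vive = (1080, 1200)
vive_pro = (1440, 1600)

ratio_x = vive_pro[0] / vive[0]  # 1440 / 1080
ratio_y = vive_pro[1] / vive[1]  # 1600 / 1200
print(ratio_x, ratio_y)  # both come out to 1.333...
```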
I am still waiting to hear whether the simple scaler in the 8K will at least make use of content stuffed into the entire transport image, or whether 20% of it will go to waste. Like this:
Don’t know what has changed since, but according to ye olde Kickstarter page FAQ, the grey areas with the arrows are not visible through the lenses. As long as the scaler can use a different ratio per axis, that should not be a problem: the transport image will be fully utilised all the way to the edges (as per the arrows) and then compressed horizontally by the scaler. But if those black bars exist in the transport image, that’s pure bandwidth waste.
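The per-axis bit is trivial in principle; a minimal sketch, with all the concrete resolutions being hypothetical illustration rather than anything Pimax has confirmed:

```python
def axis_ratios(transport, panel):
    """Independent horizontal/vertical upscale factors, so the full
    transport image can be stretched onto the panel with no black
    bars -- the two ratios need not be equal."""
    (tw, th), (pw, ph) = transport, panel
    return pw / tw, ph / th

# Hypothetical numbers: a 2560x1440 per-eye transport image
# stretched onto a 3840x2160 per-eye panel.
sx, sy = axis_ratios((2560, 1440), (3840, 2160))
print(sx, sy)  # 1.5 1.5
```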
Then of course, with just the slightest increase in scaler complexity, and without even considering foveation, it should have been possible to do something like:
…where each darker band could be scaled to half the width (…and height for the top and bottom ones), just as a function of the lenses and anisotropy, without discombobulating the image, stepping away from fixed discrete numbers, or breaking rectangularity in any way – do any of those, though, and there’s more to save.
(EDIT: Major caveat: those scaled-down region boundaries would of course be subject to the pincushion distortion of the lenses.)
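To put a rough number on what that banded scheme could save: since the bands shrink uniformly per axis, the image stays rectangular and the transport simply ends up smaller in each dimension. The band fraction and resolutions below are hypothetical, picked only to illustrate the arithmetic:

```python
def banded_transport(w, h, f):
    """Transport-image size when the outer fraction f of each axis is
    encoded at half resolution, per the darker bands: each edge band
    of width w*f shrinks to w*f/2 (likewise vertically), and the
    corners shrink in both axes, keeping the image rectangular."""
    tw = w * (1 - 2 * f) + 2 * (w * f / 2)  # = w * (1 - f)
    th = h * (1 - 2 * f) + 2 * (h * f / 2)  # = h * (1 - f)
    return tw, th

# Hypothetical numbers: a 3840x2160 per-eye image with the bands
# covering the outer 25% on each side.
tw, th = banded_transport(3840, 2160, 0.25)
saving = 1 - (tw * th) / (3840 * 2160)
print(tw, th, saving)  # 2880.0 1620.0 0.4375
```

So even fairly modest bands would knock a large chunk off the required bandwidth, before any actual foveation enters the picture.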