Unless we are talking about specially tailored hardware solutions (not off the shelf), bandwidth stays the same. If you render part of the frame in low res, you are still sending a full-resolution frame overall, because the hardware expects that. A basic upscaler cannot comprehend anything but a full frame; it would require a custom hardware upscaler, or some sort of pre-upscaler frame assembler, once again absolutely custom.
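To put rough numbers on the bandwidth point, here's some illustrative arithmetic (my own assumptions, not official Pimax specs: uncompressed 24-bit RGB, 90 Hz, no blanking or protocol overhead). The key thing is that the stream size depends only on the panel resolution and refresh rate, not on how much detail the GPU actually rendered:

```python
# Raw pixel payload for a fixed-resolution display stream.
# Assumptions (illustrative only): uncompressed 24-bit RGB, 90 Hz,
# two eyes, no blanking/protocol overhead.

def stream_gbps(width, height, hz, bpp=24, eyes=2):
    """Raw pixel payload in gigabits per second for both eyes."""
    return width * height * bpp * hz * eyes / 1e9

# Input stream the 8K's scaler chip accepts (1440p per eye):
print(round(stream_gbps(2560, 1440, 90), 1))   # 15.9 Gbps
# A true native-4K-per-eye stream would need:
print(round(stream_gbps(3840, 2160, 90), 1))   # 35.8 Gbps
```

Rendering the periphery in low res changes neither number; only a custom assembler on the headset side, reconstructing the full frame after transfer, would let you send less over the cable.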
Pimax or a 3rd party might be able to create software that tells the displays how to handle multiple resolutions. Even if we can't do native resolution, we could always use the benefits that normal foveated rendering brings to increase the supersampling like crazy. Maybe it would end up looking identical to 4K. Maybe even better than the default supersampling of the 8K X if we can go really, really high.
So a software solution to a hardware issue? Do tell. Unless Pimax is hiding an FPGA somewhere in the HMD, that is simply not happening. The issue is not different resolutions, but non-standard frames: to lower bandwidth you need to assemble the complete frame after transfer, which would require a custom DisplayPort chip at the very least. In short, a very, very low possibility of that.
Foveated rendering is a solution mainly for high GPU loads, not much more. Even then, there is no magical driver tech that will enable it for anything and everything. It must be implemented in software, there is no standard for that currently, and Khronos isn't even planning one for the first version of their OpenXR.
I now understand that it may not be possible to run at native res even with foveated rendering. Although foveated rendering could still be used for increasing supersampling much higher without sacrificing fps. Maybe even to the point where it looks just as good as native 4k. Also yes, foveated rendering will likely need to be implemented into the game by the developers.
Wait, isn’t a preupscaler frame assembler basically what I said? “The frame is then sent through a software upscaler that upscales the lower resolution area outside the native resolution center to fit the screen”. Even if that’s not the same thing. Couldn’t the frame be assembled in software before it’s sent to the headset?
Yes, in fact it MUST be assembled in software, since the headset hardware can't do it. But the frame is NOT native 4K on the Pimax 8K; the frame is 2560x1440, because of bandwidth and scaler chip restrictions.
What do you mean by assembling the frame?
Composited. Two images overlaid. The outer coarser image combined with the inner detailed foveated image. The composite frame must be sent as a regular image at 2560x1440 (per eye), which is the size expected by the headset.
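That compositing step can be sketched in a few lines (a hypothetical software-side illustration, not Pimax's actual pipeline; single-value "pixels" and nearest-neighbour upscaling for brevity):

```python
def compose_foveated(coarse, fovea, fovea_x, fovea_y, out_w, out_h):
    """Build the full frame the headset expects: upscale the coarse
    full-field image to panel resolution (nearest neighbour), then
    overlay the full-detail foveal inset at the gaze position."""
    ch, cw = len(coarse), len(coarse[0])
    # Nearest-neighbour upscale of the coarse image to out_w x out_h.
    frame = [[coarse[y * ch // out_h][x * cw // out_w]
              for x in range(out_w)] for y in range(out_h)]
    # Paste the detailed foveal region on top.
    for dy, row in enumerate(fovea):
        for dx, px in enumerate(row):
            frame[fovea_y + dy][fovea_x + dx] = px
    return frame

# 2x2 coarse field upscaled to a 4x4 frame, with a 2x2 inset at (1, 1):
print(compose_foveated([[1, 2], [3, 4]], [[9, 9], [9, 9]], 1, 1, 4, 4))
```

The GPU renders far fewer pixels (the coarse field plus a small inset), but the composited output is still a full fixed-size frame, which is why the cable bandwidth doesn't change.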
Okay… So if they could do that in software, what’s stopping us?
At the moment, it requires special coding by the developers of the game/game-engine. It’s still in the experimental stage and there’s not enough demand yet. Honestly, I don’t think foveated rendering is the panacea that many seem to think it will be. It’s probably at least 5 years from being widely supported.
5 years? 2023? That's really pessimistic. In fact, it's actually unrealistic. The way that VR is progressing, it's very likely that we'll have 8K-per-eye headsets before 2023. Isn't Apple already confirmed to be making an 8K VR headset for 2020? There's no way they're gonna be able to power those things at 90 Hz without foveated rendering. Unless the new Nvidia GPUs are amazing. Isn't the varifocal headset Oculus made gonna have eye tracking too? I'd estimate foveated rendering will definitely be a thing used in VR headsets before the end of 2019.
Sorry, I'm a realist. That's my time estimate for when foveated rendering will likely become mainstream. Sure, there may be a few apps/games that support it sooner, but actually, I think the 5-year prediction is optimistic.
Note that I'm a software developer, with experience creating games. Look at the delay between the release of a new version of DirectX and games which exclusively support that version as a minimum, as an example. I don't think we'll see widespread adoption until at least 40% of customers have a high-res VR setup, and customers won't buy high-res headsets until the games support it and they have video cards that can run it (probably a GTX 1360 card). It's the classic "chicken and egg" scenario.
I hope I’m wrong, but I feel certain that “the end of 2019” is even more optimistic than Pimax’s estimate of January '18 for the shipment of the 8K.
Uh… what? They're shipping like this month or next month. That was an old Kickstarter timeline. Of course it was too optimistic. But now that they're satisfied with the hardware, they can move on to mass production. Also, I have to disagree with your "realist" comment. I think you're being straight-up pessimistic. Even Michael Abrash predicted foveated rendering by 2021: https://www.youtube.com/watch?v=1hkZONebFM8
Exactly. I think your prediction is even more optimistic than Pimax's was.
My prediction is based on my experience in the software industry. There’s a difference between “available” and “widely adopted”.
High-end VR wasn't "widely adopted" when it first came out, but people still used it. I think it's the same for foveated rendering. It'll take time for game developers to implement it, sure. But if the performance gains are good, more developers will implement it in their games.
High-end VR still isn't widely adopted yet. It's been trapped at the business level. Pimax's release will change that, making it available to the average Joe consumer.
However @neal_white_iii is right. FOVE VR has been around for a long time & still isn't widely adopted. Oculus has implemented a fixed foveated area on the Go.
VR is still working its way into the mainstream market.
Foveated rendering will come in a variety of forms due to GPU power limits. Pimax's headset is likely to be like Doom 3 & Crysis, which challenged hardware for years & in some ways still do.
Consider SLI. There are great performance gains to be had, especially for 4K monitor res and for VR, yet SLI support seems to be reduced from years past. If there’s not “critical mass”, developers will mostly ignore a feature.
That's because for SLI to work you need two SLI-compatible GPUs with the same VRAM. That's too specific and costs much more than eye tracking. Because it costs so much, development has been slower.
Why would it make the 8K X obsolete? The thread title makes no sense.
Because the whole point of the 8K X was for it to run at native resolution.