Treat both eyes as halves of a single display, like DJI FPV goggles



@Heliosurge @SweViver @Dallas.Hao DJI’s FPV goggles do something interesting. They show one monoscopic image split across the HMD’s two displays, half on each (like an extended-desktop multi-monitor setup), to give you one very HD image.

I have realized that if you did this on the Pimax 8K (non-X), you could bypass its present bandwidth and resolution limitations in software, trading stereoscopy for one very high-res image.

Since we can drive stereo 1440p at 80 Hz, putting one 5K image across both of these panels (half each) would give you a single real 5120x2880 picture at the same bandwidth and FPS, just monoscopic.
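As a rough sketch of the split (resolutions and the function name are illustrative; a real implementation would live in the compositor, not Python):

```python
import numpy as np

# One monoscopic 5120x2880 frame (H x W x RGB); zeros stand in for pixels.
frame = np.zeros((2880, 5120, 3), dtype=np.uint8)

def split_for_panels(frame):
    """Split one wide mono frame into left/right halves,
    one half per physical panel (DJI-goggle style)."""
    h, w, _ = frame.shape
    left_panel = frame[:, : w // 2]   # left half -> left display
    right_panel = frame[:, w // 2 :]  # right half -> right display
    return left_panel, right_panel

left, right = split_for_panels(frame)
print(left.shape, right.shape)  # (2880, 2560, 3) (2880, 2560, 3)
```

Each panel then receives a 2560x2880 slice of the same single image, so both panels together present one wide picture instead of two eye views.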

You could even use a shader and timewarp to get perspective depth from these 2D images. Still 2D, but with a lot of depth cues intact at double the resolution. (Picture OG Star Tours or California Adventure at Disneyland.)

It would be like playing games in an enormous IMAX theater. It seems like a workable idea, since you are already rendering higher-than-4K images on both the 5K Plus and the 8K. Offer a monoscopic mode so that we don’t just throw all that extra rendering work away!

Sure, you lose some depth, but for sims and virtual desktop applications, you would have something viable for all of your present customers at an insanely affordable price, done entirely in software. This should be doable for both the 5K+ and the 8K.

[Pimax Response Required] 8K upgrade path for 8KX? Or 8K Lens and screen replacement / Revisions made to RE models?

Definitely seems like something to check into. A dev might be able to work up some kind of demo.


I don’t understand why everyone throws sims under the bus.
Whenever anyone around here is discussing making the experience less real, they say it will be good for sims. Sims…where you simulate real life.
I think you’re all doing it wrong.


Definitely not throwing Sims under the bus! I’ve been playing Sims since the 90s. I love Sims. I just want there to be a way to get the headset to greater resolution, because otherwise, what’s the point of having two extremely high-resolution panels if we don’t get to fully utilize them?

There’s also research showing that dropping stereoscopic 3D but using perspective to bring out depth cues is, to our visual system at least, almost as good as stereoscopy.

Because stereoscopic 3D is really just a trick with its own problems: the vergence-accommodation conflict, retinal blur done improperly even with eye tracking, etc. If you go to a monoscopic display mode, many of these problems are mitigated.

Definitely wasn’t meaning to dunk on Sims.


Not to worry, “The Sims” is an interesting game & am sure a VR version is coming. :beers::smirk::+1::sparkles:


Sorry to sidetrack. It’s an interesting idea for other applications.
I’m referring to a history of a hundred other people, so don’t take anything personally.
It’s just a phenomenon around here: whenever you’re limited in some fundamental way and can’t experience VR as it should be…then that’s okay for sims. It’s quite the opposite. Sims are a training tool for reality. If a sim doesn’t approximate reality, it becomes quite useless…and it’s time to play games instead.
3D perception and head tracking are the greatest advancements for sims.
Sorry again for distracting from topic.


I agree in terms of training & better immersion. With only 3DoF, when you turn your head you see the headrest.

But simulators are still quite enjoyable, much like seated gamepad games. Agreed, though, a more involved experience would be healthier & welcome.

The CraneVR project is a good example, along with other heavy-equipment training sims. Though I’d argue a motion seat would be the best addition for a more real-feeling experience. But most of us home users can’t go that far due to space & cost requirements. Also most sig others are not as understanding. Lol


Light-field display VR instead of stereoscopy.
I heard a few display companies are developing light-field displays: JDI and Samsung.


The issue with light-field displays is that they have an insanely higher computational cost to run, as well as being very low resolution. You solve blur, vergence-accommodation, and other focus issues, but you can do that now with monoscopic rendering.

What I’m really getting at is that Pimax’s solution for improving quality, getting better performance, etc. seems to be the brute-force approach of making newer HMDs. There is nothing wrong with that, but then everyone needs to move up a tier.

The 8K X is going to be absolutely amazing, but its likely cost means a lot of people who currently own a headset are probably going to have to go up a tier on top of all the money they’ve already spent.

So I’m asking myself: what can we do now, with the current headsets everybody owns, to mitigate some of their shortcomings?

The fact remains that right now you need a 2080 Ti to truly enjoy this thing.

If you were only rendering one view, you would cut your performance requirements roughly in half right off the bat.

For another thing, since it’s a VR device, all the games provide you with a depth buffer. This means you can use perspective to get near-stereo 3D effects at half the computational cost.
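A toy sketch of what depth-buffer “fake” stereo could look like (this is not VorpX’s actual algorithm; the function name and the depth-to-disparity mapping are my own assumptions): shift each pixel horizontally by a disparity derived from the depth buffer to synthesize a second-eye view from a single rendered image.

```python
import numpy as np

def fake_second_eye(image, depth, max_disparity=8):
    """Synthesize a second-eye view from one rendered image plus its
    depth buffer: nearer pixels (depth near 0) get a larger horizontal
    shift, which is the dominant stereo cue. Toy version: nearest-pixel
    gather with no occlusion/hole filling."""
    h, w = depth.shape
    # Disparity in pixels: near (depth ~ 0) -> max shift, far -> ~0.
    disparity = (max_disparity * (1.0 - depth)).astype(int)
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    src_x = np.clip(xs + disparity, 0, w - 1)  # gather source columns
    rows = np.arange(h)[:, None]
    return image[rows, src_x]

# Tiny demo: a 4x8 grayscale image with a near object in columns 3-4.
img = np.arange(32, dtype=np.float32).reshape(4, 8)
dep = np.ones((4, 8), dtype=np.float32)  # far everywhere...
dep[:, 3:5] = 0.0                        # ...except a near strip
right_eye = fake_second_eye(img, dep, max_disparity=2)
```

Far pixels come through unshifted while the near strip is displaced two pixels, which is exactly the parallax a real second camera would add — at the cost of one image lookup instead of a second full render.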

People have to get the misguided idea out of their heads that stereoscopy is the only way to give your visual system depth cues. It’s a flat-out false idea.

You brought up light fields as an example. In that situation, you’re rendering 16+ views at different perspectives and focal planes.

There was a lecture a while back where Oculus talked about their new deep-learning-based DeepFocus algorithm for varifocal and light-field displays.

In the lecture, the presenter made an offhand comment that it was possible to do variable focus on a monoscopic display, and also to use the depth buffer and machine learning to do light-field rendering from just one or two images.

So at least in principle, you do not need stereoscopic images to get 3D that feels authentic and works well with our visual system. Artists have known this since the Renaissance.

I really think people are overestimating the effect of stereoscopic depth, especially when they complain about the scene being out of focus most of the time.

I think it would be worth at least testing a monoscopic rendering mode.


Let’s not forget paralax(sp?) 3D displays that do not require 3D glasses, or Amazon’s 3D phone with eye-tracking IR sensors.

Or Avegant’s efforts with projection.


What are paralax(sp?) 3D displays?


I might not have it spelled correctly. They’re 3D displays that do not require 3D glasses, i.e. the Nintendo 3DS. LG had a 20" computer monitor, & while short-lived there were also 3D smartphones.




Not exactly. Though depending on what advances have been made, it might be an interesting revisit.


Nintendo 3DS display technology


I think you guys are kind of missing the point of the thread. The point is: if you just render a 2D image and use a shader, together with the depth buffer, to modify perspective (like they used to do in Renaissance art), we can get around the limitations of the scaler and the frame-rate limitations of the current 5K Plus and 8K headsets.

I’m not talking about parallax, lightfields, or anything like that.

SweViver demonstrated this kind of rendering method in his video playing Fallout 4 in VorpX. Remember, @SweViver, your video using fake 3D in VorpX? You pointed out how that version ran smoother than the native VR implementation.

You could use shaders (like a SweetFX shader) to force perspective in the scene, along with the “fake” depth-buffer 3D, to get a good sense of stereo depth without actually having to waste GPU resources rendering two images.

The idea here is to give us a mode for a smoother, higher-fidelity, higher-resolution experience on lower-end hardware by no longer rendering a separate image per eye.

This would be good for customers who can’t afford the 8K X.


Yes, an FPV HMD (one display) + light-field technology (3D) or autostereoscopy.


A light field requires you to render more than two views per eye (one view for every angle of a 3D object), so think 8x the cost of current rendering to do that.
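Back-of-the-envelope view counts per frame (the light-field count is illustrative; real implementations vary):

```python
# Views rendered per frame under each scheme (illustrative counts).
mono_views = 1            # one shared image, split across both panels
stereo_views = 2          # conventional VR: one view per eye
lightfield_views = 2 * 8  # e.g. 8 views per eye for a light field

print(stereo_views / mono_views)        # cost of stereo vs. mono
print(lightfield_views / stereo_views)  # cost of light field vs. stereo
```

So a light field multiplies the render count on top of stereo, while the mono-plus-depth-buffer approach goes the other direction and halves it.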

I’m talking about rendering one image, and using shaders and the depth buffer to get 3D perspective without actually rendering in stereo.

Look up perspective in Renaissance art. There is one-point, two-point, and three-point perspective.

This is where you draw in 2D but force the scene’s depth cues to be apparent. If you combine that style with depth-buffer “fake” 3D, you get a convincing 3D effect without actually rendering in stereo.


Light fields have one rendered image on two displays.


I don’t think you watched this video. That is a light-field stereoscope, which uses two panels and image synthesis to create the multiple views on the focal planes.

I’m suggesting something that is way less computationally intensive.