Pimax 5k+ V2 for the future


Hi Pimax,

Let me start out by making some definitions.

5k is actually 2k times 2.
8k is actually 4k times 2.
10k is actually 5k times 2.
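To make those definitions concrete, the arithmetic behind the labels can be sketched like this (the per-panel widths are assumptions based on commonly quoted specs, not official numbers):

```python
# The "Nk" marketing label roughly equals the combined horizontal
# resolution of the two panels. Per-panel widths below are assumptions.
panel_width = {
    "5k": 2560,   # two 2560-wide panels -> 5120 total
    "8k": 3840,   # two 3840-wide panels -> 7680 total
    "10k": 5120,  # two 5120-wide panels -> 10240 total
}

for name, width in panel_width.items():
    print(f"{name}: 2 x {width} = {2 * width} horizontal pixels")
```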

It may be too early to start on V2 of the 5k and 8k headsets, but it’s not too early to start discussing. I’m sure someone will disagree. :smiley:

Let’s start with the 8k. There should not be an 8k V2, or even an 8kX. There should be a 10k. 8k is an odd number to scale up to, since the 5k doesn’t scale to it cleanly. If the new high-end HMD were a 10k, you’d only have to scale up by a factor of 2, not some awkward non-integer factor.

And the 10k should have the option to either scale up from a 5k signal or accept a native 10k signal, once video cards support that resolution.

But let’s focus on the 5k. That will probably be the more popular headset.

Of course further improvements to the panel will be made to improve blacks, reduce weight, and reduce screen door effect. But there should also be improvements on what is standardized on the headset.

The 5k V2 should come standard with two front-facing cameras, two infrared cameras, and an infrared emitter. The two front-facing cameras should be high resolution, with adjustable spacing; they should be set to the width of the user’s IPD to allow mixed reality. The infrared cameras and the emitter are for hand recognition (à la Leap Motion) and for object recognition, to map physical objects into VR.

The 5k V2 should come with eye tracking and automatic IPD adjustment as standard.

The 5k should have automatic HMD adjustment (or adjustment of the lenses themselves) so the user doesn’t have to fiddle with the HMD to find the sweet spot.

The HMD should not put any stress at all on the face, with the exception of the forehead for mounting.

The 5k should come with independent focus for each pixel (or group of pixels), or some other mechanism for adjustable per-pixel focus. When you close one eye, you should still have some sort of depth perception from the focus of objects.

As an accessory, the 5k V2 should have the option of a full body sensor. Either it will be a suit or it will be a harness. But at minimum it should track the torso, arms, hands, legs, and feet.

As an accessory, the 5k V2 should have the option of a face recognition system that can map and reproduce your face in VR while wearing the HMD.

These things should be the minimum for the next Pimax HMD. I know it will cost a lot per unit, but this is a generational leap, and the first iteration of such a leap is always expensive. I think that was nVidia’s excuse for pricing the 2080 Ti so high.


You know how this is called?

Eierlegende Wollmilchsau :joy:


You’re so funny, mate. Let Pimax deliver all the headsets to the backers first; you can worry about that stuff later.


I agree some will gripe. But anyone who knows tech knows that ideas are always in testing for the next level. (5k+, anyone?)


You should work for the future Pimax marketing department :wink:


I present :
The PigMax 5Kg+ V2.0


Augmented reality would be a nice add-on for the next version, but it’s always a risk to build a product like a Swiss Army knife…


No, it’s not a Swiss Army knife product. It doesn’t do everything. I’m listing what should come standard in a headset.


It’s a goal to work towards.


Not sure why you omitted it: shouldn’t it be wireless?

It would then make sense to also add GPS and navigation support, because wireless means you can move freely in space and may only take the headset off several km away from your original starting spot. And I’d like to see this paired with a collision warning system, a Chaperone 2.0, detecting not only obstacles but also holes.

Having a Siri-like AI with GSM connectivity would also help you easily find a nearby hotel if the journey took you too far to return the same day.

That’s just me trying to be realistic about the absolutely mandatory stuff the next Pimax really shouldn’t come without. Don’t get me going about what would be nice to have if we could dream for a minute…


I don’t know about the specs, but it will have to be called the Pimax 16k, minimum.


I just want full FOV, extremely high-res, working with a $200 GPU.
(With nice colors and blacks, an XXL sweet spot, and light and comfortable.)

Thx :blush:

(BTW, I was wondering if an AI could be trained to generate the second eye’s image from the first eye’s; that would save a lot of GPU.)


Interesting idea. I’m not sure how well that would work. AI makes strange images. :laughing: Here are a few AI images from Google’s AI investigations:

Frankly, I wonder how well nVidia’s AI antialiasing will work in practice.


If it’s like this it’d be a nightmare. lol


The specs I listed for the Pimax 5k V2 are the things VR should have as standard. It’s an ideal. The final product probably won’t have all those features, but it should strive toward them.

Speaking of features, I was thinking of DoF in VR; that is, having whatever the eyes are looking at in focus and everything else out of focus. I thought of some ideas.

Does focusing have to occur for each pixel? That would be nice, but there might be an easier approximation. The focusing layer could be lower resolution, with each element covering a clump of pixels.

For example, for each 2.5k (2560-wide) panel, the focusing layer could be 1280 micro-lenses wide, or fewer. At 1280, that would be 4 panel pixels (2×2) per micro-lens. Having bigger micro-lenses in the focusing layer makes the engineering easier.

I would love to see a mock-up of this to see if this would pan out.

The major engineering challenge would be designing the grid of micro-lenses and enabling them to refocus as needed within 11.1 ms (one 90 Hz frame) or less.
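The numbers above can be sanity-checked with a quick sketch (the 2560-wide panel and 1280-wide micro-lens grid are the figures from this post; nothing here is a real Pimax spec):

```python
# Focusing-layer arithmetic for one panel. All values are the post's
# hypothetical numbers, not actual hardware specs.
panel_w, panel_h = 2560, 1440        # one "2.5k" panel
lens_grid_w = 1280                   # proposed focusing-layer width

pixels_per_lens_axis = panel_w // lens_grid_w  # 2 pixels per lens, per axis
pixels_per_lens = pixels_per_lens_axis ** 2    # 2 x 2 = 4 pixels per lens

lens_grid_h = panel_h // pixels_per_lens_axis  # 720 lens rows
total_lenses = lens_grid_w * lens_grid_h       # lenses to drive per panel

frame_budget_ms = 1000 / 90          # ~11.1 ms at a 90 Hz refresh
print(pixels_per_lens, total_lenses, round(frame_budget_ms, 1))
```

So each panel would need roughly 900k individually driven micro-lenses, each refocusing within a single frame.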

So those are my thoughts.


Instead of fancy gaze detection and per-pixel focusing, I think a better approach would be to use light-field technology, which reproduces a natural view with real depth-of-field. Then your eyes would do the focusing, just like in real life.



I’ve looked at that. It seems promising. Let’s see if it does a good enough job approximating.

And my idea wasn’t “fancy gaze detection … per-pixel focusing.” It was just an idea, trying to approximate as closely as possible.

Another method might work better. But I’m throwing out ideas.


And that’s a good thing. :slight_smile:


Those images were sent through a feedback loop to emphasize the artifacts. It’s possible for AI to reconstruct a scene convincingly. Still, it wouldn’t work for VR, since the data needs to be the same each frame.


Yes, I know. I was trying to be funny. I’ll go back and add a smiley.

Actually, I think it’s possible, to a limited extent. Asynchronous time/space warps do something like that: they use the depth map (Z-buffer) to stretch the image intelligently, to account for head motion and skipped frames. Something similar would need to be done to account for the position of the other eye.

This actually could be one mode of operation for BrainWarp: generate the image for one eye, then construct a composite image for the other eye (warping the first eye’s image and filling in missing details with the (also warped) previous frame from the second eye, since some pixels will be unknown after the Z-buffer transform). Then alternate, starting from the other eye.
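As a rough illustration of the Z-buffer transform described above, here is a toy depth-based reprojection; the baseline and focal-length values are made up, holes are simply marked rather than filled from a previous frame, and a real implementation would run on the GPU rather than loop over pixels:

```python
import numpy as np

def reproject_to_other_eye(image, depth, baseline=0.064, focal=500.0):
    """Toy sketch: shift each pixel horizontally by its stereo disparity,
    disparity = baseline * focal / depth. Pixels with no source are holes,
    which the post suggests filling from the (warped) previous frame."""
    h, w = image.shape
    out = np.zeros_like(image)
    known = np.zeros((h, w), dtype=bool)   # which output pixels got filled
    for y in range(h):
        for x in range(w):
            disparity = int(round(baseline * focal / depth[y, x]))
            nx = x - disparity             # the other eye sees it shifted
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                known[y, nx] = True
    return out, known

# Tiny demo: a flat scene at 2 m shifts uniformly by 16 pixels,
# leaving a 16-pixel hole at the right edge.
img = np.arange(32, dtype=float).reshape(1, 32)
depth = np.full((1, 32), 2.0)
warped, known = reproject_to_other_eye(img, depth)
```

The `known` mask is exactly the “some pixels will be unknown” problem: with real, varying depth the shifts differ per pixel, uncovering regions the first eye never saw.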