Re: Wide angle stereo - How to align and view #capturing #viewing #alignment

Bill G

Antonio, I just responded to a post similar to yours, but will fine-tune my response to your post...
Yes, you are correct... we only have ONE reference for how to view an image, and that is what our brains have learned to process our entire lives: our unaided vision, i.e. two images on our retinas, about 90 deg each, or roughly 120 deg total FOV.  We only see sharply in the fovea, which is limited to about 2 deg FOV.  To see sharpness outside this 2 deg, we must rotate our eyes or turn our heads.  When we turn our heads, we continually take in 120 deg; that is the max.  Trying to look at a 360 deg still image all at once is just plain confusing, as we have no reference for it; it's not how our brains have been trained.

This is what I referenced in my previous post... a VR multi-lens capture system plus a VR viewing system is the ultimate solution, as it has the potential to perfectly simulate our real-world vision, except for the inferior resolution of today's technology.  Our brains are adaptable; the VR viewer does not need to show a full 120 deg, as even 80-100 deg would trick us into believing it's the same as our real-world vision.  I viewed a few big-name VR headsets with excellent 360 deg capture, walking through nature, and it truly was the holy grail of simulating WA 3D viewing.  All that lacked was the optics in the viewer, not quite up to par yet, as the makers are more concerned with cost than quality since gaming drives the market, and the bigger variable, the screen resolution, was horrendous IMO.  I am somewhat of an IQ snob, as I still have excellent visual acuity.

Regardless, all the components of the solution exist or can easily be designed and built, as the base technology is there, but they have not yet been assembled in a high-end product at prosumer pricing.  The bottleneck IMO is the small high-res viewing screens... I would love to see dual 8K screens (or close) in these VR viewers with good optics; then all the previous WA taking and viewing methods will be completely inferior and probably become obsolete.
An 8K screen at 80 deg FOV is equivalent to 96 PPD (pixels per degree of viewing). Comparing to the Snellen eye chart, which most are familiar with (not a perfect comparison), 20/20 vision is equivalent to 60 PPD, i.e. resolving 1 arc minute.
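The arithmetic above is easy to check (a minimal sketch; the 7680-pixel width assumed here is the common 8K horizontal resolution):

```python
# Rough pixels-per-degree (PPD) check for an 8K-wide VR screen.
horizontal_pixels = 7680   # typical "8K" horizontal resolution (assumption)
fov_degrees = 80           # field of view covered by the screen

ppd = horizontal_pixels / fov_degrees
print(ppd)                 # 96.0 PPD

# 20/20 Snellen acuity resolves about 1 arc minute = 1/60 degree,
# which corresponds to 60 pixels per degree.
acuity_ppd = 60
print(ppd / acuity_ppd)    # 1.6x the 20/20 threshold
```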
Still wondering if this is 2 years away, or 20 years ;)
but I remain on the WA sidelines till then....

On Sat, Jul 11, 2020 at 9:50 AM Antonio F.G. via <> wrote:
I have been following this topic and thinking about it. I have sometimes tried to make stereo panoramas by stitching several photos using Hugin, then trying to stereo-align them, with very poor results. The best I got was a few acceptable 2D panoramas (really very few :-). I never got a stereo panorama worth keeping.

Thinking about it, I believe there is a fundamental problem: human vision CANNOT process a very large HFOV. 1 radian (roughly the angle of a 36mm-equivalent lens) is near the limit of what our vision understands (dragonflies are probably much better :-)
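As a quick sanity check on that figure (a sketch, assuming a full-frame 36 mm-wide sensor):

```python
import math

# Horizontal FOV of a 36 mm lens on a 36 mm-wide (full-frame) sensor:
# half-width 18 mm over focal length 36 mm.
hfov = 2 * math.atan(18 / 36)
print(math.degrees(hfov))   # ~53.1 degrees

# 1 radian is ~57.3 degrees, so the two are in the same ballpark.
print(math.degrees(1.0))    # ~57.3 degrees
```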

IMO the way to view large field-of-view images is to emulate what we do in real life: move the head (and the body) to watch every part of the scene around us, but only one part at a time. This implies the image should be SPHERICAL, and the viewer device should convert the section of the sphere we are looking at into a flat image that is shown to our eyes. Perhaps using the VR features of smartphones; I am not sure whether this could be made to work.

For a stereo pair, two such spherical images would be needed. I liked the way of shooting explained by Melinda in a previous post: it would require more than one turn of the rig, with the cameras looking upward, to fill the upper parts of the sphere. Now we could take all the photos of the R and L cameras and stitch each set into a spherical image. Perhaps Hugin would do it, but wait: we have to stitch the images of each side and AT THE SAME TIME ensure the L and R spheres are aligned for stereo. That is, the matching points should be at the same elevation angle in the sphere. This looks feasible, more so because it is a batch task. What looks more difficult to me is how to quickly pan the spherical image in the viewer, because it requires continuously transforming from spherical to flat.
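The spherical-to-flat conversion described above can be sketched in a few lines of NumPy, assuming the sphere is stored as an equirectangular panorama (the function name and parameters are illustrative, and a real-time viewer would do this per frame, with interpolation, on the GPU):

```python
import numpy as np

def equirect_to_rectilinear(pano, yaw_deg, pitch_deg,
                            out_w=640, out_h=480, fov_deg=90):
    """Sample a flat (rectilinear) view from an equirectangular panorama.

    pano: H x W x 3 array covering 360 deg horizontally, 180 deg vertically.
    yaw_deg/pitch_deg: viewing direction in degrees (the "head turn").
    """
    H, W = pano.shape[:2]
    # Focal length (in pixels) of the virtual flat screen for the given FOV.
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)

    # Pixel grid of the virtual screen, centred on the optical axis.
    x = np.arange(out_w) - (out_w - 1) / 2
    y = np.arange(out_h) - (out_h - 1) / 2
    xx, yy = np.meshgrid(x, y)
    zz = np.full_like(xx, f, dtype=float)

    # Rotate the viewing rays: pitch about the x-axis, then yaw about y.
    p, t = np.radians(pitch_deg), np.radians(yaw_deg)
    y1 = yy * np.cos(p) - zz * np.sin(p)
    z1 = yy * np.sin(p) + zz * np.cos(p)
    x2 = xx * np.cos(t) + z1 * np.sin(t)
    z2 = -xx * np.sin(t) + z1 * np.cos(t)

    # Convert each ray to longitude/latitude, then to panorama pixel coords.
    lon = np.arctan2(x2, z2)                 # -pi .. pi
    lat = np.arctan2(y1, np.hypot(x2, z2))   # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi)) + 0.5) * (W - 1)
    v = ((lat / np.pi) + 0.5) * (H - 1)

    # Nearest-neighbour lookup keeps the sketch short.
    return pano[v.round().astype(int).clip(0, H - 1),
                u.round().astype(int).clip(0, W - 1)]
```

Panning is then just a matter of calling this with a new yaw/pitch, which is why the continuous transform is the performance-critical part.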

