Re: IPD vs Taking spacing. First true 3D camera announced for 2020

Bill G

> Give a newb a MF viewer with ortho views, and the avg. person is freaked out by the realism of viewing a captured scene with life-like depth, that is the appeal.

For sure... IF the picture itself is appealing. 

         This has not been my experience at all.  I can show a newb a MF 3D picture of the inside of my garage, and they can't stop looking at it.  This assumes they have the 3D gene.  We all know that those who don't see or appreciate depth are not impressed.

 The problem I have seen for years is that people argue for orthostereoscopy and then take pictures of mostly flat scenes, or scenes where most of the subject matter is twenty feet or more away.  Even though we do perceive depth at such distances, the amount of deviation is minimal and the result is an image that doesn't look much different from a 2D image.

             IF the resolution of the taking lenses, capture media and viewing system is sufficient, 20 ft nears will produce the same depth effect in the viewer as they do in the real world.  The deviation will make it to the retina.  When shooting ortho, my IDEAL near distance was about 12 ft, but I have shot many nears at 40 ft and the depth effect is still overwhelming.  To transfer deviation, there can be no weak links in the chain (taking lenses, taking media, viewing optics, etc.) that degrade the deviation before it projects onto the retina.
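The point about deviation surviving at larger near distances can be sketched with the standard on-film deviation formula; this is my own illustration using common stereo math, and all the lens/film numbers below are assumptions, not figures from the posts above:

```python
def on_film_deviation_mm(focal_mm, base_mm, near_mm, far_mm=float("inf")):
    """Approximate on-film deviation between the nearest and farthest
    scene points: d = f * B * (1/near - 1/far)."""
    far_term = 0.0 if far_mm == float("inf") else 1.0 / far_mm
    return focal_mm * base_mm * (1.0 / near_mm - far_term)

FT = 304.8  # millimetres per foot

# Assumed ortho-style taking base (~65 mm) on an 80 mm medium-format lens:
d12 = on_film_deviation_mm(80, 65, 12 * FT)  # nears at 12 ft
d40 = on_film_deviation_mm(80, 65, 40 * FT)  # nears at 40 ft
```

With these assumed numbers, 12 ft nears give roughly 1.4 mm of on-film deviation and 40 ft nears about 0.4 mm: smaller, but still well above what sharp medium-format optics and film resolve, which is the "no weak links in the chain" point.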

David Burder did some tests on how people perceive interaxial many years ago and found that when they were asked which pictures looked "natural", they invariably chose the ones with a wider than normal interaxial.

                  This is the opposite of what my tests revealed... of course, there are so many variables not mentioned here.  For example, if it's a cityscape with nears at 5 miles, of course hyper will seem more appealing.  As always, the devil is in the details, so it's hard to make blanket statements like this.

I myself was surprised to find that some pictures I took that were shot at twice the normal interaxial did look full size.  Jacques Coté showed me some portraits he took for L'Oréal where the models looked larger than life and yet were shot with a 150mm interaxial and portrait lenses.  So the brain can easily be tricked.

           Yes, the brain can be tricked, "within limits", and everyone seems to have a different threshold for where those limits are.  This is what makes non-ortho so difficult for sharing images... again, for personal consumption, anything goes.

What I find is that it's mostly purists in the 3D community who are "annoyed" by seeing depth intensity where there should be none.  I haven't found anyone outside the community annoyed by it.  By the way, Pompey Mainardi - genius inventor of the Tri-Delta system - was also an avid fan of hyperstereo even though his own invention was designed with a 62.5mm "normal" interaxial!

              It's NOT about seeing depth, it's about completely unnatural views, such as: why do those trees look 2 inches tall?

It depends how extreme the hyper effect is and what the subject is.  I actually very seldom shoot with a very wide interaxial myself, but I definitely often shoot with a 100 to 200mm lens separation.

              Again, it's not just the base, it's the near and far distances... McKay wrote a good book on this issue; he studied it for years.  I followed many of his formulas.  They were well thought out and quite math intensive.  In the end, I abandoned this technique, as the result was so hit or miss: 1 in 5 images was good, 4 in 5 seemed unnatural and undesirable.
                But again, details matter.  When shooting a subject such as a bird on a branch, 300 ft away, with the sky as the ONLY background, hyper worked remarkably well.  But these are rare scenes, i.e. shallow depth of field with no far background; in such cases our brains do not have all the variables needed to distort the image.

Lucky for me, I don't need to depend on what other people claim from having at one time seen a single picture.  I have owned a LEEP viewer for over 35 years and shot LEEP pictures for two weeks when the camera was loaned to me.  So I know exactly what this viewer can do.  And the effect is amazing.  The two main flaws are the low resolution due to the use of 400 ASA film (since the camera was fixed focus) and colour fringing due to low-cost uncorrected plastic lenses.

        We all agree, a wide AFOV in viewing adds a tremendous WOW effect.  The reason the LEEP system, or even current VR, has not become more mainstream (IMO) is that the IQ is poor.  Our basis of IQ is what we see with the unaided eye, and LEEP and VR fall waaaaay short of that basis.  As we all know, the makers of these products are in the process of trying to advance the IQ of these systems; everyone knows the weak link is IQ.  Great for gaming, but just OK for "fine art" viewing.

> which supports my findings in trying to design and produce super WA viewing lenses.

There was a radical difference between Eric's approach and yours.  Eric started out with fisheye-distorted images and then used fisheye viewing lenses to re-establish the geometry.  You start with corrected shooting lenses, and it then becomes a much bigger challenge to design viewer optics that won't distort your images.

                 Not sure how you knew all the research I did??  I do remember sharing a "few" things with you, but certainly not all.  One avenue I spent over a year researching was to alter the optical designs, relaxing the design criteria and letting distortion go.  Optical software can perfectly graph the lens's distortion pattern on an X-Y graph, so I then anti-distorted the captured images digitally to match the lens distortion.  I shot brick walls to run these tests.

The results demonstrated just how complex optics design and execution is.  Without boring this list to tears, the short story of the findings was as follows.  Within the "eye box", i.e. the viewing area where the image center, lens center and eye lens center are all concentric, this goal can "in theory" be achieved.  However, the tolerance levels of the eye box were so small that it would NEVER be practical to keep all three of these variables concentric in the real world.  While the anti-distortion worked perfectly on my optical bench, with tremendous precision in the alignment, it took only 1mm of physical movement of one of the variables to break the concentric alignment, and distortion returned.

In addition, I discovered another distortion variable that has never been discussed in 3D viewing, for which I coined the term "distortion rivalry".  This occurs when the two sides have a different form of distortion.  Now the brain must contend with a new form of rivalry (distortion variance between the two views) which it never faces in unaided vision.  This is another source of tremendous viewing stress, all deteriorating the viewing experience.
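The anti-distortion idea and its alignment sensitivity can be sketched with a toy one-term radial model. This is purely illustrative: the work described above used professional optical design software and real measured distortion curves, and the coefficient and offsets below are made-up numbers:

```python
def radial_map(x, y, k, cx=0.0, cy=0.0):
    """One-term radial distortion r' = r * (1 + k * r^2) about an
    optical center (cx, cy), in normalized image coordinates."""
    dx, dy = x - cx, y - cy
    s = 1.0 + k * (dx * dx + dy * dy)
    return cx + dx * s, cy + dy * s

def predistort(x, y, k):
    """First-order anti-distortion: warp with -k so the viewer lens's
    +k distortion approximately cancels it."""
    return radial_map(x, y, -k)

K = 0.1                        # assumed viewer-lens distortion coefficient
xp, yp = predistort(0.5, 0.0, K)

# Perfectly concentric: the lens restores the point almost exactly.
x_ok, _ = radial_map(xp, yp, K)
err_centered = abs(x_ok - 0.5)

# Decenter the lens axis slightly (image, lens and eye centers no
# longer concentric): residual distortion returns.
x_off, _ = radial_map(xp, yp, K, cx=0.05)
err_decentered = abs(x_off - 0.5)
```

Even in this crude model, a small decentering makes the residual error several times worse than the well-aligned case, which is the eye-box tolerance problem in miniature; real multi-element designs are far less forgiving.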

I had the benefit of access to the best optical design software and optical labs in the USA... Eric was doing his work long before these sophisticated tools were available.  BTW, even with the anti-distortion system, I kept the MTF requirements high, which still forced a minimum 5-element lens design using high-end glass and coatings.  Also, the WIDE AFOV produces optics of very wide diameter, assuming sufficient ER, which is mandatory to cover those who wear specs.  To attain these views, it would be impossible to use one, or a few, plastic elements.

This is why, in previous posts, I mentioned that it will take a massive breakthrough in optical design to overcome these limitations.  To make a high-resolution optic with a super wide AFOV that is lightweight, small, etc. would defy physics as we know it today.  Hence I am hoping for VR with eye tracking to simplify the optical requirements.  Or, even more ideal IMO, a mid-range viewer to get rid of optics completely, or the holy grail IMO: 8K 3DTV.  Seems we are soooo close ;)

My own goals were closer to yours in that I wanted to have wide angle camera lenses that did not have fisheye distortion.  That's what I didn't like about the LEEP camera.  Every picture was fisheye and could only be "decoded" in the LEEP viewer.  Back in the early nineties, I got involved in a project for a wide angle MF camera that had such lenses.  But then I discovered that trying to find appropriate orthostereoscopic viewing lenses for it was a nightmare - as you found out yourself.

           I did accomplish this to a degree: 60 deg AFOV (not the 90 deg holy grail) with breathtaking MTF, no distortion, no color fringing, etc.  But again, the optics were the size of your fist, weighed almost 2 lbs each, and would cost about $2k each.

> Once they can produce higher resolution displays, such as 4k per eye

BTW, the Cinera uses two separate large rectangular displays, 2.5K resolution each.  The result is impressive.

          Can you imagine the jump to 5K... but again, optics IMO will always be the weak link in the chain with these close-viewing systems, vs. a non-optical viewing system, until eye tracking is introduced... or a means to keep the eye looking straight forward ONLY.

> This will allow low cost, and light weight lenses to be used with excellent IQ in the center portion only.

There is a LOT of research work going on at this moment, and miniature displays (smaller than a penny) have been shown that have QHD resolution.  Several companies are now working on small high-resolution VR "glasses" (as opposed to "headsets").  So we are going to get there.  (see attached)

               Agreed.  Even with a reduced AFOV, a pocket viewer into which you can insert a memory card would be another holy grail... i.e. the holy grail for 3D portability ;)

As for the issue of my holy grail of a viewing and taking system: yes, there are a few 180 deg taking systems on the market that are good... but as I mentioned, "a complete taking and viewing system" is my holy grail.  There is no viewing system that can utilize these captures at a level sufficient to match the taking system.  The market is much more advanced on the capture side, as it can steal technology from 2D capture; that is not true for viewing, hence viewing will always be the weak link in a complete system until a big maker designs with a taking-and-viewing-system mindset from the start.
