Topics

IPD vs Taking spacing. First true 3D camera announced for 2020

bglick97@...
 


Capture lens center spacing significantly greater than IPD creates the "miniaturization effect" of the entire scene, for those who suffer from it.  I've never seen stats on what % of the population is affected, or exactly what the threshold is; I doubt such stats exist.  If I had to wager a guess, probably 5-8mm.  It's my experience that reduced capture lens spacing only reduces the total depth sensation, nothing else.

The variance in FOV of taking vs. viewing creates variances in near/far size relationships vs. how the scene appears to the unaided eye at the camera position.  This can also surface as stretch or compression, same as in 2D.  How strong this undesirable effect is depends on how significant the difference of taking vs. viewing FOV is, combined with the person's threshold.

So, lots of issues to get right... getting one of the variables ideal (capture lens spacing) is always helpful to achieving believable/accurate 3D views.  But as you correctly state, the taking and viewing FOV variable still exists, unfortunately.  This IMO is one of the hardest things to overcome for standardizing 3D imagery in our modern 3D world.  The variances are huge, 40 deg to 80 deg ;).

The 1950s had all the variables very close to standardized, making for imagery that "could" transform from one system to another (even though this never happened).  While Realist images in Realist viewers would NOT WOW you by today's standards (low viewing FOV, poor lighting, etc.), the 3D effect was spot-on, aka Ortho.  It seems today, the super wide FOV seen in VR systems has led the race... making this imagery only usable in those VR headsets.




On Mon, Jan 13, 2020 at 11:47 AM depthcam via Groups.Io <depthcam=yahoo.ca@groups.io> wrote:
> Wow, at 65mm that makes this the first 3D camera I can purchase with precisely my interpupillary distance.  I doubt that will make a huge difference....


Not likely because of the FOV differences between recording and viewing.


> Of course, I'm assuming the Insta's synch will be better than Lucid's.


You can't compare the two.  Insta360 has been producing multi-lens cameras for years and their cameras have a very good reputation as quality products.  Also, just last year they introduced a 3D 180 camera (the EVO) and the sync on that one is excellent.

Francois

 

> Capture lens center spacing,  significantly greater than IPD
> creates the "miniaturization effect" of the entire scene, for
> those who suffer from it. 
I'm one of those folks. While I appreciate a good hyper, in
general it's not my favorite kind of 3D image and not anything
I've been interested in creating myself. Different strokes...

So for me I want a camera that is close to 'normal'. A bit wider
interaxial is only a problem, for me, in terms of limiting how
close I can get to the nearest object and still include infinity
in the shot.  (And I do very much believe in the axiom: If your
pictures aren't good enough, you're not close enough.)

Ortho is an ideal worth pursuing and I admire those who pursue
that ideal. But for me it is just too limiting in terms of
available cameras and viewers to keep the capture/viewing
parameters in strict concordance. I don't want to have to use
one specific camera/focal-length for my 3D TV (and the distance I
view it), and a different setup for each of my digital viewers.
But if somebody made a *really* good 3D camera and digital viewer
combination that achieved true ortho and only ortho, I'd
certainly buy it! (Well, assuming I could afford it.)

So where is the digital equivalent of the LEEP Panoramic Stereo
Photography system?

http://www.leepvr.com/

Is it just a matter of getting the 'right' digital 3D VR camera
(perhaps the Insta360 One R?) and an Oculus Quest?

...BC

bglick97@...
 

I agree with your positions...
I should have expanded one part though...
I have found the capture and viewing FOV does not need to be a perfect match, i.e. true ortho.  It seems our brains have plenty of leeway in this area.  For example, using a fixed-FOV viewer, capture FOV can vary +/- 30%, and maybe as much as 50%, based on variables such as the resolution of the images and the MTF of the viewing optics.  So there is some leeway with no negative consequences for sure.  I agree, sometimes you can't capture an image unless you use a wider lens, same as in 2D.
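That leeway can be framed as a trivial check.  A sketch of my own; the 30% and 50% figures are the rough estimates given above, not hard limits:

```python
def fov_mismatch_ok(taking_fov_deg: float, viewing_fov_deg: float,
                    tolerance: float = 0.30) -> bool:
    """True when the taking FOV is within +/- tolerance of the
    viewing FOV, i.e. inside the rough leeway described above."""
    return abs(taking_fov_deg - viewing_fov_deg) / viewing_fov_deg <= tolerance

print(fov_mismatch_ok(45, 35))         # ~29% wider: inside the 30% leeway
print(fov_mismatch_ok(60, 35))         # ~71% wider: outside even 50%
print(fov_mismatch_ok(50, 35, 0.50))   # ~43% wider: OK only at the 50% limit
```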

As for the LEEP viewer... I wrote about this on this forum a few weeks ago.  The pitfall of this ever becoming reality is the limitation of optics for close viewing.  The level of complexity for wide-FOV optics is beyond the size and price point for 3D viewing.  I spent years trying to accomplish it, and you end up with hand-grenade-size lenses, and even these will only get you to maybe 55-60 deg HFOV.  In wide-HFOV optics such as those in VR viewers, the MTF and distortion are not favorable for fine 3D viewing, though suitable for gaming.

Hence the value of 8K TVs: now we have enough pixels and contrast in a display, but without 3D TVs available in 8K, we are at a roadblock again.  I think side-by-side images would also be damn good, but there is still no easy way to view them other than cross-eyed, which is not ideal for the masses and certainly not for video.  Mirror viewers are cumbersome, limit the FOV, suffer cross talk... too many issues.

I agree with your suggestion of a 3D system, camera + viewer, similar to the Fuji W3 and its 3D viewer.  Unfortunately, that came early in the electronics revolution of viewers, and once again the viewing was the weak link, so it died off quickly; Fuji can't make electronics for a few hobbyists, it must appeal to the masses.  I think the only hope for what you suggest is a revolutionary phone maker; they have the capital and technical expertise to make a high-res phone screen and hopefully a 3D screen... RED has done this already.  I never saw their 3D screen, but it's a start for sure; now we need a Samsung- or Apple-size company to try their hand at 3D.  It's the perfect marriage, as they already sell the viewing screen and camera in one package.  Maybe when they start running out of features to sell phones, a 3D system will be considered.


On Mon, Jan 13, 2020 at 2:06 PM Bill Costa <Bill.Costa@...> wrote:



 

> I have found, the capture and viewing FOV does not need to be a
> perfect match, i.e. true ortho.
I have certainly not done any structured investigation of my own,
so my response is simply a gut reaction, and I would agree this
certainly seems to be the case.  Otherwise I wouldn't have so
many 3D images that I find satisfying on both my TV and in a
digital viewer!

> The level of complexity for wide FOV optics is beyond the size
> and price point for 3d viewing.
While high immersion would certainly be desirable, wouldn't
something a bit less ambitious in terms of HFOV, but well matched
capture/view with really good image quality still be a win?
Something along the lines of the IQ of a 3D medium format film
viewer would be enough to rock my world.

> Hence the value of 8K tvs ...
I think my 65" 4K TV, which only displays up-scaled HD from MPOs,
is already at the limit of what I can appreciate sitting 9 feet
away. While I can get a lot closer to the TV before seeing
pixels, the passive 3D rapidly breaks down when you get too
close. So it'd have to be 8K active or some different 3D
technology to get close enough for full immersion *and* 3D. In
any case we aren't going to see that for a long, long time, if
ever. Perhaps in some sort of commercial display.

> I agree with your suggestion of a 3d system, camera + viewer. 
What I can imagine would be a portable VR viewer with a 3D camera
module. You'd hold it and take pictures like using binoculars.
Something along the lines of the Sony electronic binoculars which
Dr T has been fond of. Solving the capture/view in one package
would have a lot going for it, but hard to believe it would be
something that any major manufacturer would market. Joe Average
might get a big wow factor looking at the image, but would there
be enough people that could actually see themselves using it,
particularly if that's the only way you can view the photos in
3D?

If anybody were to do it, it would be the Chinese who seem to
like 3D more than we do in the West.

...BC

bglick97@...
 

I agree with your assessment.  If there is to be a breakthrough in the near future, it will probably come from the Chinese, just as they made a MF viewer and camera... it seems they have the desire to create new products whose volumes are not mega, unlike what the big names want to produce.  But it would still have to sell enough to be profitable, which has proven very difficult.  In all fairness, though, a high-quality system has never been produced.  This is why I think the phone is the big chance for widespread 3D: offer 3D as an option on one of the big-name high-end phones.  You will need glasses for now, but for real enthusiasts, that's not a deal breaker.

I also agree with your "less ambitious" approach to a workable 3D system.  A MF viewer on par with the 3D World is obviously very cost effective... it holds a phone, and that phone also captures the 3D.  If the screen has enough resolution, such as the new screens coming out this year (5K, for example), this would be a decent product, especially if it records 3D video on the phone as well.  The fact that it's 3D video vs. stills would be a game changer, as it would provide a new WOW factor that a still can't deliver; motion furthers the depth effect as the deviation changes.  The format would be something closer to square vs. the typical 16:9 format... again, IMO, workable.

The 3D World viewer had 80mm FL lenses: approx. 35 deg HFOV with a 50mm-wide image.  Just for round numbers, if each square image ended up with a usable pixel count of 4MP, or 2K x 2K pixel dimensions, then 2000 pixels / 35 deg HFOV = 57 pixels per degree.  This is on par with 20/20 vision: 1 arc minute, or 60 pixels per degree.  Hence why I and others on this list were discussing the new phones coming out this year.  The Sony at 5K will probably lead the pack.  It would be interesting to see how this imagery would compare to MF film in the same viewer.  The MF film has about 2x the linear resolution, approx. 16MP per image (4K x 4K), so the digital display would have much less resolution, but it's possible the added contrast pixels offer will close the gap a bit.
Also, cramming in this many pixels per inch, it's possible the pixels won't have quite the dynamic range of the current crop, which has excellent DR.
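For what it's worth, the arithmetic above checks out.  Here is the same back-of-the-envelope calculation as a small script; the 80mm / 50mm / 2K figures come from the paragraph above, while the thin-lens FOV formula is my own addition:

```python
import math

def hfov_deg(focal_mm: float, image_width_mm: float) -> float:
    """Horizontal field of view of a viewer lens of the given focal
    length looking at an image of the given width (thin-lens approx.)."""
    return math.degrees(2 * math.atan((image_width_mm / 2) / focal_mm))

def pixels_per_degree(h_pixels: int, hfov: float) -> float:
    """Average angular resolution across the field."""
    return h_pixels / hfov

fov = hfov_deg(80, 50)             # ~34.7 deg, matching the ~35 deg above
ppd = pixels_per_degree(2000, fov) # ~57.6 px/deg
print(f"HFOV = {fov:.1f} deg, {ppd:.0f} px/deg (20/20 vision ~ 60 px/deg)")
```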

A few on this list who have the Sony phones now, mentioned the view is very good...  would be nice to hear more opinions from those pioneers ;)


On Mon, Jan 13, 2020 at 5:39 PM Bill Costa <Bill.Costa@...> wrote:



depthcam
 

Back in film days, orthostereoscopy was my holy grail.  I really was convinced that the ideal stereo experience would be one where the interaxial was strictly "normal" - about 65mm - and where the taking and viewing focal length were rigorously matched.  I remember a fellow I corresponded with who had designed his own self-transposing 3D camera with 35mm FL lenses and had also built his own 35mm viewer using a Ramsden-type configuration.  The lenses were quite large, low distortion and provided an amazing view.

But I couldn't build such a viewer myself... couldn't quite find the right combination of optics.  My solution was to use my spliced SLR with 50mm FL lenses and view the resulting images with a viewer that had 50mm optics.  That worked well, but I also had a pair of 35mm lenses and felt kinda guilty that I actually preferred the wider non-orthostereoscopic images to the orthostereoscopic ones !

I also found myself experimenting with hyperstereo and found I liked the extra texture these images provided !  That became a moral dilemma !  Then I met a fellow stereographer by the name of Jacques Côté.  Jacques was the designer of the 3Discover viewer and his approach to 3D photography was radically different from mine.  Rather than seek to reproduce normal vision, he would try to make each picture as depth-rich as possible.  I looked at quite a number of his images and I couldn't deny that his pictures were exciting and engaging !  I thought mine were rather dull in comparison !

From that point on, I completely changed my outlook on 3D photography and started shooting to enhance the depth rather than attempt to recreate normal vision.  Because let's face it:  Normal visual experience is not really exciting.  Most people aren't even aware that they see in 3D.  There needs to be something more to give that extra sense of excitement to a 3D picture...

Not that I reject orthostereoscopy.  It does have its place.  But I feel that creativity in the 3D medium requires us to go beyond what normal human vision can do.

One requirement of orthostereoscopy that is most often disregarded is the need for the FOV to attempt to match that of human vision.  This is what Eric Howlett was trying to do with his LEEP camera and viewer.  It did work because Eric found that if he used fisheye lenses on the camera as well as on the viewer, he could achieve wide angle orthostereoscopy.

This is what - several decades later - some companies are trying to do with VR 180.  It's possible thanks to digital technology that allows us to view a virtual image and to look at different parts of it by moving our head while wearing a headset.  It really is amazingly realistic.  But for me, it is one of many ways to approach 3D imaging.

With restricted FOV displays such as we now use - from glasses-free phones and tablets to large screen TVs - I embrace a more creative approach where interaxial choice is based on the subject and the effect rather than on a strict set of rules.

Francois

bglick97@...
 

>  From that point on, I completely changed my outlook on 3D photography and started shooting to enhance the depth rather than attempt to recreate normal vision.  Because let's face it:  Normal visual experience is not really exciting.  Most people aren't even aware that they see in 3D.  There needs to be something more to give that extra sense of excitement to a 3D picture...

Give a newb a MF viewer with ortho views, and the avg. person is freaked out by the realism of viewing a captured scene with life-like depth; that is the appeal.  To the avg. person, depth is depth; it doesn't matter how it was attained, i.e. what taking base, etc.

>  Not that I reject orthostereoscopy.  It does have its place.  But I feel that creativity in the 3D medium requires us to go beyond what normal human vision can do.

For individual use, anything goes; much depends on what you can tolerate.  But for the masses, when you try to trick what normal human vision can do, the viewing experience "can" become problematic.  Our brains only have ONE reference for how depth should appear.  I have seen some hypers that are required, as the subjects are way too far away to achieve depth otherwise, but often it's accompanied by the annoying miniaturization effect.  Many people I show wide-base shots which miniaturize... their first response is to break out laughing...
Sometimes that would make a nice artistic goal though....

>  One requirement of orthostereoscopy that is most often disregarded is the need for the FOV to attempt to match that of human vision.  This is what Eric Howlett was trying to do with his LEEP camera and viewer.  It did work because Eric found that if he used fisheye lenses on the camera as well as on the viewer, he could achieve wide angle orthostereoscopy.

Eric was shooting for the Holy Grail; he was way ahead of his time.  But super wide viewing lenses have tremendous limitations.  The few people I know who looked through the viewer said the view was problematic, which supports my findings in trying to design and produce super-WA viewing lenses.


> This is what - several decades later - some companies are trying to do with VR 180.  It's possible thanks to digital technology that allows us to view a virtual image and to look at different parts of it by moving our head while wearing a headset.  It really is amazingly realistic.  But for me, it is one of many ways to approach 3D imaging.

Agree on realism; the potential is there, but there are lots of obstacles to turning VR into much higher image quality.  I just LOVE the ability to look around a scene.  I think the next breakthrough in VR to drastically increase image quality is to use the same technology that 3D PC viewing has used: eye tracking.  Once they can produce higher-resolution displays, such as 4K per eye, the user can view straight ahead through about 5-10 deg of the lens center.  This will allow low-cost, lightweight lenses to be used, with excellent IQ in the center portion only.  The rest of the image outside the center image circle can show 90% less resolution (combining pixels), as only about 2 deg of our foveal vision has high resolving capability; our peripheral visual acuity is horrendous.  By limiting the high-resolution area to such a small region, the graphics processing burden drops drastically, making this premise feasible with current processors.  All this has been proven in 3D PC displays, so it comes down to making it work in VR at consumer pricing, etc.  This, IMO, would be a huge breakthrough for VR.... but like many 3D wishes, they never occur ;(
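A rough illustration of the processing savings being claimed, using made-up but plausible numbers (4K per eye, 100 deg FOV, a 10 deg full-resolution center; none of these are from a real headset spec):

```python
# Hypothetical numbers for illustration: a 4K-per-eye display spanning
# a 100-deg field, where only a 10-deg central circle is rendered at
# full resolution and the periphery at ~90% fewer pixels.
full_w, full_h = 3840, 2160          # assumed per-eye resolution
fov_deg = 100                        # assumed total field of view
fovea_deg = 10                       # full-resolution central region

full_pixels = full_w * full_h
# Fraction of the field covered by the central circle, approximated
# as the area ratio of the two circles.
center_frac = (fovea_deg / fov_deg) ** 2
foveated = (full_pixels * center_frac
            + full_pixels * (1 - center_frac) * 0.10)

print(f"full render:     {full_pixels / 1e6:.1f} MP per eye")
print(f"foveated render: {foveated / 1e6:.1f} MP per eye "
      f"({foveated / full_pixels:.0%} of full)")
```

Under those assumptions, the foveated budget is roughly a tenth of the full-resolution one, which is the gist of the argument above.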

> With restricted FOV displays such as we now use - from glasses-free phones and tablets to large screen TVs - I embrace a more creative approach where interaxial choice is based on the subject and the effect rather than on a strict set of rules.

It's been my experience that without rules, problems begin to surface.  In my dream world: ONE camera with super-WA lenses, offering stills and video, displayed on the VR set I described above... no thinking, just focus on capturing, and everything will work seamlessly ;)

Of course, infinity shots from airplanes, etc., do not apply... I am mostly referring to scenes with a reasonable distance to the near subjects.



On Tue, Jan 14, 2020 at 3:35 AM depthcam via Groups.Io <depthcam=yahoo.ca@groups.io> wrote:

 

> Eric was shooting for the Holy grail, he was way ahead of his
> time.  But super wide viewing lenses have tremendous
> limitations.  the few people I know who viewed through the
> viewer stated the view problematic ...
I got to see the LEEP camera and viewer in person and was able to
look at one slide in the LEEP viewer. As I recall the image was
of a rainy day in Boston taken from the interior of a car looking
up and out the windshield. You could see the dashboard, the
street scene, buildings and sky. The realism of the image was
evocative and captivating.

Holy grail indeed.

Unfortunately that was the only slide I ever got to see in this
viewer.

...BC

timo@guildwood.net
 

I saw that image too.  I think it was at the NSA convention in Grand Rapids.
It was spectacular.  A superb experience.  I hope one day we will have the capability to make such images again.

Timo


On Jan 14, 2020, at 7:23 PM, Bill Costa <bill.costa@...> wrote:



depthcam
 

> Give a newb a MF viewer with ortho views, and the avg. person is freaked out by the realism of viewing a captured scene with life-like depth, that is the appeal.


For sure... IF the picture itself is appealing.  The problem I see and have seen for years has been for people to argue for orthostereoscopy and then take pictures of mostly flat scenes or where most of the subject matter is twenty feet or more away.  Even though we do perceive depth at such distances, the amount of deviation is minimal and the result is an image that doesn't look much different from a 2D image.


>  To the avg. person, depth is depth, it doesn't matter how it was attained, ie. what taking base, etc.


Exactly.


> But for the masses, when you try to trick what normal human vision can do, the viewing experience "can" become problematic.   Our brains only have ONE reference on how depth should appear.


David Burder did some tests on how people perceive interaxial many years ago and found that when they were asked which pictures looked "natural", they invariably chose the ones with a wider than normal interaxial.

I myself was surprised to find that some pictures I took that were shot at twice the normal interaxial did look full size.  Jacques Côté showed me some portraits he took for L'Oréal where the models looked larger than life, and yet they were shot with a 150mm interaxial and portrait lenses.  So the brain can easily be tricked.


> its accompanied by the annoying miniaturization effect.


What I find is that mostly purists in the 3D community are "annoyed" by seeing depth intensity where there should be none.  I haven't found anyone annoyed by it outside the community.  By the way, Pompey Mainardi - genius inventor of the Tri-Delta system - was also an avid fan of hyperstereo even though his own invention was designed with a 62.5mm "normal" interaxial !


> Many people I show wide base shots which miniaturize... their first response is, they break out laughing...


It depends how extreme the hyper effect is and what the subject is.  I actually very seldom shoot with a very wide interaxial myself but definitely often shoot with between a 100 to 200mm lens separation.


> the few people I know who viewed through the (LEEP) viewer stated the view problematic


Lucky for me, I don't need to depend on what other people claim from having at one time seen a single picture.  I have owned a LEEP viewer for over 35 years and have shot LEEP pictures for two weeks when the camera was loaned to me.  So I know exactly what this viewer can do.  And the effect is amazing.  The main two flaws are the low resolution due to the use of 400 ASA film (since the camera was fixed focus) and colour fringing due to low-cost uncorrected plastic lenses.


> which supports my findings in trying to design and produce super WA viewing lenses.


There was a radical difference between Eric's approach and yours.  Eric started out with fisheye-distorted images and then used fisheye viewing lenses to re-establish the geometry.  You started with corrected shooting lenses, and it then becomes a much bigger challenge to design viewer optics that won't distort your images.

My own goals were closer to yours in that I wanted to have wide angle camera lenses that did not have fisheye distortion.  That's what I didn't like about the LEEP camera.  Every picture was fisheye and could only be "decoded" in the LEEP viewer.  Back in the early nineties, I got involved in a project for a wide angle MF camera that had such lenses.  But then I discovered that trying to find appropriate orthostereoscopic viewing lenses for it was a nightmare - as you found out yourself.

Then I understood why Eric had taken that route.  He already knew that was the only way to have a wide angle orthostereoscopic system at a reasonable cost.


> Once they can produce higher resolution displays, such as 4k per eye


BTW, the Cinera uses two separate large rectangular displays 2.5k resolution each.  The result is impressive.


> This will allow low cost, and light weight lenses to be used with excellent IQ in the center portion only.


There is a LOT of research work going on at this moment, and miniature displays (smaller than a penny) have been shown that have QHD resolution.  Several companies are now working on small high-resolution VR "glasses" (as opposed to "headsets").  So we are going to get there.  (see attached)


> Its been my experience, without rules, problems begin to surface.


I fully agree.  Note that I wrote "a strict set of rules" - not "without rules".  What I mean here is that once one understands the mathematics of parallax (one of the first things I studied in the early eighties - I even wrote my own BASIC programs to calculate it), one can then play around with the variables while at the same time ensuring that the results will be comfortable to view.
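In the spirit of those old BASIC programs, here is a minimal sketch of the standard parallax arithmetic.  This is my own formulation, not Francois's program; the 1.2 mm comfort limit is the classic rule of thumb for 35 mm film work:

```python
def deviation_mm(base_mm, focal_mm, near_mm, far_mm=float("inf")):
    """On-film parallax difference between the nearest and farthest
    points: d = B * F * (1/near - 1/far)."""
    return base_mm * focal_mm * (1.0 / near_mm - 1.0 / far_mm)

def max_base_mm(focal_mm, near_mm, far_mm=float("inf"), max_dev_mm=1.2):
    """Largest stereo base that keeps the deviation within a comfort
    limit (1.2 mm being the classic figure for 35 mm film)."""
    return max_dev_mm / (focal_mm * (1.0 / near_mm - 1.0 / far_mm))

# 35 mm lenses, nearest object at 2 m, background at infinity:
b = max_base_mm(35, 2000)       # ~68.6 mm, close to a normal interaxial
d = deviation_mm(65, 35, 2000)  # deviation with a 65 mm base: ~1.14 mm
```

Playing with the variables (longer lenses, closer nears, wider bases) while watching the deviation stay under the limit is exactly the kind of exploration described above.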


> In my dream world, ONE camera with super WA lenses, offering stills and video.


What you are describing is essentially a VR180 camera.  There are some fairly good ones out there.  But what I am waiting for is an 8K model.  It may be just around the corner.

Francois

depthcam
 

Looks like the attachments didn't make it.  Here they are again...

Francois

gl
 

On 13/01/2020 22:06, Bill Costa wrote:
But if somebody made a *really* good 3D camera and digital viewer
combination that achieved true ortho and only ortho, I'd
certainly buy it!  (Well, assuming I could afford it.)
I guess you'd want a camera that had 2 or more lenses and could synthesize natural-looking IAs to suit the viewing conditions.  If that was all stored in the image metadata, then digital viewer software could be written that automatically chose the correct ortho IA for any viewing situation.

Quite feasible technically, but unlikely to happen.
--
gl

bglick97@...
 

> Give a newb a MF viewer with ortho views, and the avg. person is freaked out by the realism of viewing a captured scene with life-like depth, that is the appeal.

For sure... IF the picture itself is appealing. 

         This has not been my experience at all.  I can show a newb a MF 3d pix of the inside of my garage, and they can't stop looking at it.  This assumes they have the 3d gene.  We all know, those who don't see, or appreciate, depth are not impressed.  


 The problem I see and have seen for years has been for people to argue for orthostereoscopy and then take pictures of mostly flat scenes or where most of the subject matter is twenty feet or more away.  Even though we do perceive depth at such distances, the amount of deviation is minimal and the result is an image that doesn't look much different from a 2D image.

             IF the resolution of the taking lenses, capture media and viewing system is sufficient, 20ft nears will produce the same depth effect in the viewer as they do in the real world.  The deviation will make it to the retina.  When shooting ortho, my IDEAL near distances were about 12ft.  But I have shot many nears at 40ft, and the depth effect is still overwhelming.  To transfer deviation, there can be no weak links in the chain to degrade it before it projects onto the retina: taking lenses, taking media, viewing optics, etc.  



David Burder did some tests on how people perceive interaxial many years ago and found that when they were asked which pictures looked "natural", they invariably chose the ones with a wider than normal interaxial.

                  This is the opposite of what my tests revealed... of course, there are so many variables not mentioned here... for example, if it's a cityscape with nears at 5 miles, of course hyper will seem more appealing.  As always, the devil is in the details, so it's hard to throw out blanket statements like this.

I myself was surprised to find that some pictures I took that were shot at twice the normal interaxial did look full size.  Jacques Coté showed me some portraits he took for L'Oréal where the models looked larger than life and yet were shot with a 150mm interaxial and portrait lenses.  So the brain can easily be tricked.

           Yes, the brain can be tricked... "within limits", and everyone seems to have a different threshold of where these limits are.  This is what makes non-ortho so difficult for sharing images... again, for personal consumption, anything goes.  


What I find is that mostly purists in the 3D community are "annoyed" by seeing depth intensity where there should be none.  I haven't found anyone annoyed by it outside the community.  By the way, Pompey Mainardi - genius inventor of the Tri-Delta system - was also an avid fan of hyperstereo even though his own invention was designed with a 62.5mm "normal" interaxial !

              Its NOT about seeing depth, its about completely un natural views, such as,  why do those trees look 2 inches tall??  


It depends how extreme the hyper effect is and what the subject is.  I actually very seldom shoot with a very wide interaxial myself but definitely often shoot with between a 100 to 200mm lens separation.

              Again, it's not just the base, it's the near and far distances... McKay wrote a good book on this issue; he studied it for years.  I followed many of his formulas... they were well thought out, and quite math intensive.  In the end, I abandoned this technique, as the result was so hit or miss... 1 in 5 images were good, 4 in 5 seemed unnatural and undesirable.
                But again, details matter.  When shooting a subject such as a bird on a branch, 300ft away, with the sky as the ONLY background... hyper worked remarkably well.  But these are rare scenes, i.e. short depth of field with no far or infinity... in this case, our brains do not have all the variables to distort the image.
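For illustration, the kind of base arithmetic such formulas revolve around can be sketched by inverting the usual deviation relation.  The variable names and numbers here are mine, not McKay's; his actual treatment is far more involved:

```python
def stereo_base_mm(target_dev_mm, focal_mm, near_mm, far_mm):
    """Lens separation that yields a chosen total deviation,
    given the near and far distances (first-order approximation)."""
    return target_dev_mm / (focal_mm * (1.0 / near_mm - 1.0 / far_mm))

# Bird-on-a-branch hyper: near ~300ft (~91m), sky-only background (far = inf),
# 35mm lenses, 1.2mm deviation budget.
b = stereo_base_mm(1.2, 35.0, 91_000.0, float("inf"))
print(b)  # ~3120mm, i.e. roughly a 3m base
```

With a true far point in the scene instead of sky, the same formula collapses the allowable base dramatically, which is exactly the hit-or-miss behavior described above.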

Lucky for me, I don't need to depend on what other people claim from having at one time seen a single picture.  I have owned a LEEP viewer for over 35 years and have shot LEEP pictures for two weeks when the camera was loaned to me.  So I know exactly what this viewer can do.  And the effect is amazing.  The main two flaws are the low resolution due to the use of 400 ASA film (since the camera was fixed focus) and colour fringing due to low-cost uncorrected plastic lenses.

       We all agree, wide AFOV of viewing adds tremendous WOW effect.  The reason the LEEP system, or even current VR, does not become more mainstream (IMO) is because the IQ is poor.  Our basis of IQ is what we see with the unaided eye; LEEP and VR fall waaaaay short of that basis.  As we all know, the makers of these products are in the process of trying to advance the IQ of these systems... everyone knows the weak link is IQ.  Great for gaming, but just OK for "fine art" viewing.   

> which supports my findings in trying to design and produce super WA viewing lenses.

There was a radical difference between Eric's approach and yours.  Eric started out with fisheye distorted images and then used fisheye viewing lenses to re-establish the geometry.  You start with corrected shooting lenses and it becomes a much bigger challenge then to try and design viewer optics that won't distort your images.

                 Not sure how you knew all the research I did??  I do remember sharing a "few" things with you, but certainly not all.  One avenue I spent over a year researching was to alter the optics designs, relax the design criteria, and let distortion go.  Optical software can perfectly graph the optical distortion pattern on an X-Y graph.  I then anti-distorted the captured images digitally to match the lens distortion.  I shot brick walls to run these tests.  The results demonstrated just how complex optics design and execution is.

Without boring this list to tears, the short story of the findings was as follows.  In theory, the goal can be achieved within the "eye box", i.e. the viewing area in which the image center, lens center and eye lens center are all concentric.  However, the tolerance levels of the eye box were so small, it would NEVER be practical to keep all three of these variables concentric in the real world.  While this anti-distortion would work perfectly on my optical bench, with tremendous precision in the alignment... it only took 1mm of physical movement of one of the variables, creating non-concentric alignment, and distortion returned.

In addition, there was another distortion variable, never discussed in 3d viewing, which I discovered; I coined the term "distortion rivalry".  This occurs when the two sides have a different form of distortion.  Now the brain must contend with a new form of rivalry (distortion variance in the two views) which it does NOT contend with in our unaided vision.  This is another source of tremendous viewing stress, all deteriorating the viewing experience.  
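For a rough illustration of the digital anti-distortion step: a radial polynomial is fitted to the lens, then image coordinates are pre-warped with its inverse so the lens distortion cancels on viewing.  The model form and coefficients below are illustrative only, not my actual lens data:

```python
def distort_r(r, k1, k2):
    """Radial distortion model: observed radius for a true radius r."""
    return r * (1 + k1 * r**2 + k2 * r**4)

def undistort_r(r_obs, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration (no closed form)."""
    r = r_obs
    for _ in range(iters):
        r = r_obs / (1 + k1 * r**2 + k2 * r**4)
    return r

# Round-trip check at a normalized radius of 0.8 with illustrative coefficients:
r = undistort_r(distort_r(0.8, -0.15, 0.05), -0.15, 0.05)
print(abs(r - 0.8) < 1e-9)  # True
```

Note the fragility described above shows up right in the model: it assumes the optical axis stays centered on the pre-warped image, so a millimeter of decentering invalidates the cancellation.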

I had the benefit of having access to the best optical design software and optical labs in the USA... Eric was doing his work long before these sophisticated tools were available.  BTW, even with the anti-distortion system, I kept the requirements for MTF high, which still forced the use of a min. 5-element lens design using high-end glass and coatings.  Also, the WIDE AFOV produces optics of very wide diameter, assuming sufficient ER, which is mandatory to cover those who wear specs.  To attain these views, it would be impossible using one, or a few, plastic elements.  

This is why, in previous posts, I mentioned it will take a massive breakthrough in optical design to overcome these limitations.  To make a high resolution optic, with a super wide AFOV, that is light weight, small, etc., would defy physics as we know it today.  Hence why I am hoping for VR with eye tracking to simplify the optical requirements.  Or, even more ideal IMO, a mid-range viewer to be rid of optics completely, or the holy grail IMO, 8K 3DTV.  Seems we are soooo close ;)

My own goals were closer to yours in that I wanted to have wide angle camera lenses that did not have fisheye distortion.  That's what I didn't like about the LEEP camera.  Every picture was fisheye and could only be "decoded" in the LEEP viewer.  Back in the early nineties, I got involved in a project for a wide angle MF camera that had such lenses.  But then I discovered that trying to find appropriate orthostereoscopic viewing lenses for it was a nightmare - as you found out yourself.

           I did accomplish this to a degree: 60 deg AFOV (not the 90 deg holy grail) with breathtaking MTF, no distortion, no color fringing, etc.  But again, the optics were the size of your fist, weighed almost 2lbs each, and would cost about $2k each.  


> Once they can produce higher resolution displays, such as 4k per eye


BTW, the Cinera uses two separate large rectangular displays 2.5k resolution each.  The result is impressive.

          Can you imagine the jump to 5K... but again, optics IMO will always be the weak link in the chain with these close viewing systems, vs. a non-optical viewing system, till eye tracking is introduced... or a means to keep the eye looking straight forward ONLY.


> This will allow low cost, and light weight lenses to be used with excellent IQ in the center portion only.


There is a LOT of research work going on at this moment and miniature displays (smaller than a penny) have been shown that have QHD resolution.  Several companies are now working on small high resolution VR "glasses" (as opposed to "headsets") So we are going to get there. (see attached)

               Agreed; even with reduced AFOV, a pocket viewer that you can insert a memory card into would be another holy grail... i.e. the holy grail for 3d portability ;)

As for the issue of my holy grail of a viewing and taking system:  Yes, there are a few 180 deg taking systems on the market that are good... but as I mentioned, "a complete taking and viewing system" is my holy grail... there is no viewing system that can utilize these captures at a level sufficient to the taking system.  The market is much more advanced on the capture side, as it can steal technology from 2d capture; not true for viewing.  Hence why viewing will always be the weak link for a complete system, till a big maker designs with a taking-and-viewing-system mindset from the start.  


depthcam
 

> IF the resolution of the taking lenses, capture media and viewing system is sufficient, 20ft nears will produce the same depth effect in the viewer as it does in the real world.


But my point is that this "real world" effect at that distance has so little deviation that, to most people, it is indistinguishable from a 2D picture of the same scene.  Again, stereo enthusiasts will look for the tiny bit of depth while most people will not.


> But I have shot many nears at 40ft, and the depth effect is still overwhelming.


I guess we have different perceptions of what constitutes an overwhelming picture.  It sounds more like your viewing system is what produces the appeal.  I remember you telling me that people were as impressed viewing a 2D picture in your viewer as viewing a 3D one.

> As always, the devil is in the details, so hard to throw out blanket statements like this.

Just reporting David Burder's research.  I personally never found anyone who could determine the interaxial set in a picture just from looking at it unless the base was massive.


> This is what makes non ortho so  difficult for sharing images...again, for personal consumption, anything goes.


Not a problem if you know your math.  It's all about presenting an amount of deviation to the eyes that is comfortable.  The rest is left to one's creativity.  I think where you and I differ is that you are trying to reproduce real world viewing.  I am not.  Most people wake up every day seeing in 3D and it's only when things look "different" that they take notice.  My sister lost sight in one eye a few years ago and one day she told me that her doctor had said she would no longer see in 3D.  She asked me what that was all about.  She argued she sees just the same with one eye as she did with two !  That shows you how much people actually notice depth in every day life.


> Its NOT about seeing depth, its about completely un natural views, such as,  why do those trees look 2 inches tall?? 


But that's exactly what I like.  Trying to move away from "normal" viewing.  Doing things with images that contradict reality. Not all the time, mind you.  But some of the time.  My own mother didn't care much for 3D until in 1986 I showed her some night shots of light paintings I had taken at Expo 86.  That's when she went "wow" !


> Again, its not just the base, its the near and far distances.


Of course.  That was my point about the math and the programs I wrote back then to calculate it. Pompey Mainardi was a math teacher and a specialist in 3D calculations.  He was my mentor.


> The reason the LEEP system, or even current VR does not become more mainstream (IMO), is because the IQ is poor.


That's one reason.  But the greater one is people don't like to strap on what feels like a large diving mask on their heads to view pictures !  Heck, they didn't even like having to wear "sunglasses" to watch TV !


>  Not sure how you knew all the research I did??  I do remember sharing a "few" things with you, but certainly not all.


You told me a lot more than you seem to remember - including sending me a schematic of your incredibly complex viewer.  The point you made at the time was how very expensive such a viewer would be.  The point I made is that Eric Howlett found a way around that by using cheap uncorrected fisheye lenses both in the camera and the viewer.  It may not have been the best, but at least it showed what could be done and it was going to be accessible.  Unfortunately, he was a one-man operation with some help from friends but little funding.  Maybe something better could have been achieved, had a camera manufacturer taken over production.  But at least we got to see very early on what the potential was.

> "a complete taking and viewing system" is my holy grail...

I agree that the manufacturers who have made VR180 cameras so far have left the viewing choice to the individual.  That's probably to reduce costs and also because they know the buyers are for a large part already owners of a viewing device such as the Oculus Go.  Mind you, Lenovo did offer a Mirage headset to go with their Mirage camera.  However, most people just bought the camera, which cost less than the viewer !

Francois


bglick97@...
 

>  But my point is that this "real world" effect at that distance has so little deviation that, to most people, it is indistinguishable from a 2D picture of the same scene.  Again, stereo enthusiasts will look for the tiny bit of depth while most people will not.

            Again, at 20ft nears, we have completely different experiences... even 30ft nears to infinity have overwhelming depth... again, variables exist which make these discussions a bit senseless: acuity of our eyes, resolution of our taking lenses, resolution of our taking medium, resolution of viewing lenses, and the brightness of the lighting.  It's a complete optical chain; every variable matters, and it only takes one weak link to limit the entire chain.  In addition, I developed a lighting system that was so bright, it drove the pupil diameter down to its min diameter, 3-4mm on avg.  The only sharp portion of our eye's MTF curve is when the pupil is less than 4mm diam.  So hopefully we can agree our experiences are different, because our taking and viewing equipment was prob. different, as well as our visual acuity could be much different.    
           Since you are well versed in math, take the foveal resolution (similar to pixels / mm), run the math for deviation and determine how much deviation human eye spacing can deliver and still be perceptible.  You would be amazed vs. the numbers you are tossing around.
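For illustration, that calculation can be sketched as follows.  The 10-20 arcsec stereoacuity threshold is a commonly cited figure (individual thresholds vary widely), and the 65mm eye spacing is the usual average:

```python
import math

def disparity_arcsec(ipd_m, dist_m):
    """Binocular disparity, in arc seconds, between a point at dist_m
    and a point at infinity (small-angle approximation)."""
    return math.degrees(ipd_m / dist_m) * 3600

for ft in (20, 40, 100):
    print(ft, "ft:", round(disparity_arcsec(0.065, ft * 0.3048)), "arcsec")
# 20ft -> ~2200, 40ft -> ~1100, 100ft -> ~440 arcsec: all orders of
# magnitude above a 10-20 arcsec acuity threshold, so the geometry alone
# says such depth is detectable -- IF the optical chain preserves it.
```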


>  I personally never found anyone who could determine the interaxial set in a picture just from looking at it unless the base was massive.

                    I know I never stated this.  Maybe someone else made this assertion, and you were commenting to them.  dunno...

>   She argued she sees just the same with one eye as she did with two !  That shows you how much people actually notice depth in every day life.
                    Lots of vision books are available that explain this in detail... there are a lot more 3d cues our brain has, not just deviation.  And yes, that is why sometimes I put a 2d view in my viewer and people can still sense depth... again, this is superb imagery with massive backlighting... do they sense the same depth?  Of course not, but it's still noticeable.  As I wrote on the forum many times, the greatest 3d viewing experience I ever had in my life was viewing a 2d video in a half dome.  This was the greatest demonstration I ever experienced proving the added cues our brain uses to sense depth.  But deviation is still one of the strongest cues available, though certainly not the only one.  

>  You told me a lot more than you seem to remember -  including sending me a schematics of your incredibly complex viewer.

              How would you know everything I was working on??   You must be telepathic.  Yes, I remember sending you a few ray traces, but you saw maybe 5% of the 3d projects I worked on over a 4 yr period...  I think I would know this better than you ;)

>   It may not have been the best, but at least it showed what could be done and it was going to be accessible, 

            I disagree with your assessment.  This is quite the stretch.  It's like stating: hey, a ViewMaster is step one, this shows we will eventually be able to mimic human vision in future small, light weight viewers.  In some fields, getting the last 10% of a design complete never occurs, despite millions invested and decades of effort.  I know of a lot of optical products the military tried to design where this was the case, and remains the case.  I am not knocking Eric in any way; he was beyond a pioneer.  I have such respect for his diligence, motivation, drive, etc., especially with such limited resources.  I wish there were more Erics out there today!

As for the complete taking and viewing system... of course the cameras can be conquered, as they are nothing more than 2d cameras synced.  IMO, if a killer, cost-effective viewer system were to hit the market, we would see many camera makers jump into the market.  When you consider Facebook has dumped hundreds of millions (prob. billions now) of R&D into the Oculus, and the current evolution is how far it's developed, this demonstrates just how complex close 3d viewing is.  I am grateful for these high tech companies, gamers, etc. that drove the technology this far.  Just hope it continues....










bglick97@...
 

In this thread, a poster was amazed at the ability to sense depth with only one eye... I had commented on the many depth cues available to the brain... Just by chance, I happened to stumble across that page from a vision book... thought it would be of interest to list members...




On Wed, Jan 15, 2020 at 2:05 PM bglick97 via Groups.Io <bglick97=gmail.com@groups.io> wrote:
>  But my point is that this "real world" effect at that distance has so little deviation that, to most people, it is indistinguishable from a 2D picture of the same scene.  Again, stereo enthusiasts will look for the tiny bit of depth while most people will not.

            Again, at 20ft nears, we have completely different experiences...even 30ft nears to infinity have overwhelming depth... again, variables exist which make these discussions a bit senseless, acuity of our eyes, resolution of our taking lenses, resolution of our taking medium, resolution of viewing lenses, and the brightness of the lighting.  Its a complete optical chain, every variable matters, it only takes one weak link that provides a limitation to the entire chain.    In addition,   I developed a lighting system that was so bright, it  drove the pupil diameter down to its  min diam, 3-4mm on avg.   The only sharp portion of our eyes MTF curve is when the pupil is less than 4mm diam.   So hopefully we can agree, our experiences are different, because our taking and viewing equipment was prob. different, as well as, our visual acuity could be much different.    
           Since you are well versed at math, take the foveal resolution (similar to pixels / mm), run the math for deviation and determine how much deviation human eye spacing can deliver and be perceptible.  U would be amazed vs. the numbers you are tossing around.


>  I personally never found anyone who could determine the interaxial set in a picture just from looking at it unless the base was massive.

                    I know I never stated this.  Maybe someone else made this assertion, and you were commenting to them.  dunno...

>   She argued she sees just the same with one eye as she did with two !  That shows you how much people actually notice depth in every day life.
                    Lots of vision books available that explain this in detail... there is lot more 3d cues our brain has, not just deviation.  And yes, that is why sometimes I put a 2d view in my viewer and people can still sense depth.... again, this is superb imagery with massive backlighting... do they sense the same depth, of course not, but its still noticeable.  As I wrote on the forum many times, the greatest 3d viewing experience I ever had in my life was viewing a 2d video in a half dome.   This was the greatest demonstration I ever experienced proving the added cues our brain uses to sense depth.   But deviation is still one of the strongest cues available, but certainly not the only one.  

>  You told me a lot more than you seem to remember -  including sending me a schematics of your incredibly complex viewer.

              How would you know everything I was working on??   You must be telepathic.  Yes, I remember sending you a few ray traces , but you saw maybe 5% of the 3d projects I worked on over a 4 yr period...  I think I would know this better than you ;)

>   It may not have been the best, but at least it showed what could be done and it was going to be accessible, 

            I disagree with your assessment.  This is quite the stretch.  Its like stating, hey a ViewMaster is step one, this shows we will eventually be able to mimic human vision in future small light weight viewers.    In some fields, getting the last 10% of a design complete, never occurs despite millions invested and decades.  I know of a lot of optical products the military tried to design, whereas this was the case, and remains the case.  I am not knocking ERic in anyway, he was beyond a pioneer.  I have such respect for his diligence, motivation, drive, etc., specially with such limited resources.  I wish there was more Erics out there today!

As for the complete taking and viewing system...of course the cameras can be conquered, as its nothing more than 2d cameras synced.  IMO, if a killer, cost effective viewer system was to hit the market, we would see many camera makers jump in the market.   When you consider Facebook has dumped hundreds of millions (prob billions now)  in RnD in the Oculus, and the current evolution is how far its developed, this demonstrates just how complex close 3d viewing is.  I am grateful for these high tech companies, gamers, etc that drove the technology this far.   Just hope it continues....








On Wed, Jan 15, 2020 at 11:51 AM depthcam via Groups.Io <depthcam=yahoo.ca@groups.io> wrote:

> IF the resolution of the taking lenses, capture media and viewing system is sufficient, 20ft nears will produce the same depth effect in the viewer as it does in the real world.


But my point is that this "real world" effect at that distance has so little deviation that, to most people, it is indistinguishable from a 2D picture of the same scene.  Again, stereo enthusiasts will look for the tiny bit of depth while most people will not.


> But I have shot many nears at 40ft, and the depth effect is still overwhelming.


I guess we have different perceptions of what constitutes an overwhelming picture.  It sounds more like your viewing system is what produces the appeal.  I remember you telling me that people were as impressed viewing a 2D picture in your viewer as viewing a 3D one.

> As always, the devil is in the details, so hard to throw out blanket statements like this.

Just reporting David Burder's research.  I personally never found anyone who could determine the interaxial set in a picture just from looking at it unless the base was massive.


> This is what makes non ortho so  difficult for sharing images...again, for personal consumption, anything goes.


Not a problem if you know your math.  it's all about presenting an amount of deviation to the eyes that is comfortable.  The rest is left to one's creativity.  I think where you and I differ is that you are trying to reproduce real world viewing.  I am not.  Most people wake up every day seeing in 3D and it's only when things look "different" that they take notice.  My sister lost sight in one eye a few years ago and one day she told me that her doctor had said she would no longer see in 3D.  She asked me what was that all about.  She argued she sees just the same with one eye as she did with two !  That shows you how much people actually notice depth in every day life.


> Its NOT about seeing depth, its about completely un natural views, such as,  why do those trees look 2 inches tall?? 


But that's exactly what I like.  Trying to move away from "normal" viewing.  Doing things with images that contradict reality. Not all the time, mind you.  But some of the time.  My own mother didn't care much for 3D until in 1986 I showed her some night shots of light paintings I had taken at Expo 86.  That's when she went "wow" !


> Again, its not just the base, its the near and far distances.


Of course.  That was my point about the math and the programs I wrote back then to calculate it. Pompey Mainardi was a math teacher and a specialist in 3D calculations.  He was my mentor.


> The reason the LEEP system, or even current VR does not become more mainstream (IMO), is because the IQ is poor.


That's one reason.  But the greater one is people don't like to strap on what feels like a large diving mask on their heads to view pictures !  Heck, they didn't even like having to wear "sunglasses" to watch TV !


>  Not sure how you knew all the research I did??  I do remember sharing a "few" things with you, but certainly not all.


You told me a lot more than you seem to remember -  including sending me a schematics of your incredibly complex viewer.  The point you made at the time was how very expensive such a viewer would be.  The point I made is that Eric Howlett found a way around that by using cheap uncorrected fisheye lenses both in the camera and the viewer.  It may not have been the best, but at least it showed what could be done and it was going to be accessible,  Unfortunately, he was a one-man operation with some help from friends but little funding.  Maybe something better could have been achieved, had a camera manufacturer taken over production.  But at least we got to see very early on what the potential was.

> "a complete taking and viewing system" is my holy grail...

I agree that the manufacturers who have made VR180 cameras so far have left the viewing choice to the individual.  That's probably to reduce costs, and also because they know the buyers are, for a large part, already owners of a viewing device such as the Oculus Go.  Mind you, Lenovo did offer a Mirage headset to go with their Mirage camera.  However, most people just bought the camera, which cost less than the viewer !

Francois


John Clement
 

When doing 2D to 3D conversions, the depth cues provided monoscopically make the conversion look better than it is.  The number of planes of depth does not have to be very large for a reasonable conversion.  To the average viewer, good depth cues in a flat painting can sometimes fool them into thinking it is actually 3D.  True stereoscopic 3D can resolve objects that otherwise look like a flat jumble, so I love taking pictures of misty scenes.  Motion also provides intense cues.  Actually, at high speeds stereopsis is negligible, so I understand a one-eyed pilot can land a plane.  But two good functioning eyes are required for commercial pilots.
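The "small number of depth planes" idea can be illustrated with a toy sketch (entirely my own, not any real converter's code): quantize a per-pixel depth map into a handful of planes and shift each plane horizontally in proportion to its depth to synthesize the second eye's view.

```python
# Toy plane-based 2D-to-3D conversion (illustrative only).  Real
# converters also fill the disocclusion holes this shifting leaves.

def planes_to_stereo(image, depth, n_planes=8, max_shift=12):
    """image: list of rows of pixels; depth: same shape, values in
    [0, 1] with 1.0 = nearest.  Returns a synthesized right-eye view;
    unfilled pixels (holes) are left as None."""
    h, w = len(image), len(image[0])
    right = [[None] * w for _ in range(h)]
    # Paint far planes first so nearer planes correctly occlude them.
    for p in range(n_planes):
        shift = round(max_shift * p / (n_planes - 1))
        lo, hi = p / n_planes, (p + 1) / n_planes
        for y in range(h):
            for x in range(w):
                d = depth[y][x]
                in_plane = lo <= d < hi or (p == n_planes - 1 and d == 1.0)
                if in_plane and 0 <= x + shift < w:
                    right[y][x + shift] = image[y][x]
    return right
```

Even with only four or eight planes, the shifted result reads as rounded depth to most viewers, which is John's point about conversions looking better than they are.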

 

John M. Clement

 

From: main@Photo-3d.groups.io <main@Photo-3d.groups.io> On Behalf Of bglick97@...
Sent: Friday, January 17, 2020 4:25 PM
To: main@photo-3d.groups.io
Subject: Re: [Photo-3d] IPD vs Taking spacing. First true 3D camera announced for 2020

 

In this thread, a poster was amazed at the ability to sense depth with only one eye... I had commented on the many depth cues available to the brain... Just by chance, I happened to stumble across that page from a vision book... thought it would be of interest to list members...

On Wed, Jan 15, 2020 at 2:05 PM bglick97 via Groups.Io <bglick97=gmail.com@groups.io> wrote:

>  But my point is that this "real world" effect at that distance has so little deviation that, to most people, it is indistinguishable from a 2D picture of the same scene.  Again, stereo enthusiasts will look for the tiny bit of depth while most people will not.

 

            Again, at 20ft nears, we have completely different experiences... even 30ft nears to infinity have overwhelming depth... again, variables exist which make these discussions a bit senseless: acuity of our eyes, resolution of our taking lenses, resolution of our taking medium, resolution of viewing lenses, and the brightness of the lighting.  It's a complete optical chain; every variable matters, and it only takes one weak link to limit the entire chain.  In addition, I developed a lighting system that was so bright it drove the pupil diameter down to its minimum, 3-4mm on avg.  The only sharp portion of our eye's MTF curve is when the pupil is less than 4mm in diameter.  So hopefully we can agree our experiences are different, because our taking and viewing equipment was prob. different, and our visual acuity could be much different as well.

           Since you are well versed in math, take the foveal resolution (similar to pixels/mm), run the math for deviation, and determine how much deviation human eye spacing can deliver and still be perceptible.  You would be amazed compared with the numbers you are tossing around.
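For anyone wanting to try that exercise, here is a rough back-of-envelope version (my own numbers and thresholds, not bglick's actual math): small-angle binocular disparity between two distances is roughly the eye spacing times the difference of their reciprocals, and comparing that to a stereoacuity threshold (values around 10-60 arcseconds are commonly quoted) gives the farthest distance at which depth against infinity is still resolvable.

```python
import math

# delta = B * (1/d_near - 1/d_far) radians (small-angle approximation),
# where B is the eye spacing in meters.

ARCSEC = math.pi / (180 * 3600)   # one arcsecond in radians

def disparity_rad(base_m, d_near_m, d_far_m=math.inf):
    far = 0.0 if math.isinf(d_far_m) else 1.0 / d_far_m
    return base_m * (1.0 / d_near_m - far)

def max_depth_distance(base_m, threshold_arcsec):
    """Farthest distance whose depth vs. infinity is still resolvable."""
    return base_m / (threshold_arcsec * ARCSEC)

# 65mm eye spacing, 20 arcsec threshold:
limit = max_depth_distance(0.065, 20)            # roughly 670 m
near20ft = disparity_rad(0.065, 6.1) / ARCSEC    # ~2200 arcsec at 20 ft
```

On these assumptions a 20ft near point carries on the order of a hundred times the threshold disparity, which is consistent with bglick's claim that such scenes are far from flat.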

 

 

>  I personally never found anyone who could determine the interaxial set in a picture just from looking at it unless the base was massive.

 

                    I know I never stated this.  Maybe someone else made this assertion, and you were commenting to them.  dunno...

 

>   She argued she sees just the same with one eye as she did with two !  That shows you how much people actually notice depth in every day life.

                    Lots of vision books explain this in detail... there are a lot more 3D cues our brain uses, not just deviation.  And yes, that is why sometimes I put a 2D view in my viewer and people can still sense depth... again, this is superb imagery with massive backlighting... do they sense the same depth? Of course not, but it's still noticeable.  As I wrote on the forum many times, the greatest 3D viewing experience I ever had in my life was viewing a 2D video in a half dome.  That was the greatest demonstration I ever experienced proving the added cues our brain uses to sense depth.  Deviation is still one of the strongest cues available, but certainly not the only one.  

 

>  You told me a lot more than you seem to remember -  including sending me a schematics of your incredibly complex viewer.

 

              How would you know everything I was working on??  You must be telepathic.  Yes, I remember sending you a few ray traces, but you saw maybe 5% of the 3D projects I worked on over a 4 yr period...  I think I would know this better than you ;)

 

>   It may not have been the best, but at least it showed what could be done and it was going to be accessible, 

 

            I disagree with your assessment.  This is quite the stretch.  It's like stating: hey, a ViewMaster is step one, and this shows we will eventually be able to mimic human vision in future small, lightweight viewers.  In some fields, getting the last 10% of a design complete never occurs, despite millions invested and decades of effort.  I know of a lot of optical products the military tried to design where this was the case, and remains the case.  I am not knocking Eric in any way; he was beyond a pioneer.  I have such respect for his diligence, motivation, drive, etc., especially with such limited resources.  I wish there were more Erics out there today!

 

As for the complete taking and viewing system... of course the cameras can be conquered, as they are nothing more than 2D cameras synced.  IMO, if a killer, cost-effective viewer system were to hit the market, we would see many camera makers jump into the market.  When you consider that Facebook has dumped hundreds of millions (prob. billions now) into R&D on the Oculus, and how far its current evolution has come, it demonstrates just how complex close 3D viewing is.  I am grateful for these high-tech companies, gamers, etc. that drove the technology this far.  Just hope it continues...

depthcam
 

> In this thread, a poster was amazed at the ability to sense depth with only one eye...


It wasn't a poster.  It was a poster's sister ! ;-)


> I had commented on the many depth cues available to the brain...


I am well aware of these and I had told her about them at the time.  Motion parallax is the strongest one.  Seeing with one eye is very different than looking at a 2D picture because a 2D picture has recorded only one perspective while a person with sight in only one eye still moves within a three-dimensional world where the perspective changes constantly as one moves about.  But the fact that she could not tell this from her seventy years as a two-eyed person surprised me.

Francois

bglick97@...
 

> In this thread, a poster was amazed at the ability to sense depth with only one eye...
It wasn't a poster.  It was a poster's sister ! ;-)

                 I only mentioned the poster commenting on the subject, never stated it was you ;)


> I had commented on the  many depth cues available to the brain...

I am well aware of these and I had told her about them at the time.

                   I realize you are aware of all things 3D; that's why I posted it for the benefit of "others" on the forum: "thought it would be of interest to list members.."




John Rupkalvis
 

Sensing depth and actually perceiving depth are two different things.  Recognizing that some object is closer than another from monoscopic depth cues such as occlusions or relative size will tell you that it is likely one of the objects is closer than the other, but you still will not see actual depth with just one eye.  The scene has to be seen stereoscopically, with both eyes, for actual depth perception.  

Classic artists used monoscopic depth cues such as perspective to indicate that one object in their painting was supposed to be closer than the other.  This did not, however, result in stereopsis.  The painting still appeared flat because, in reality, it was still 2D.  They had to carve a sculpture to actually appear in 3D (which has been done for thousands of years).  The ancient Greeks even had a word for it.  They said that they were "stereo", which in ancient Greek means "solid".  

Do not confuse monoscopic depth cues with stereoscopic depth cues.  Motion parallax (also called time parallax) is a stereoscopic depth cue, not monoscopic.  It requires both eyes to see stereoscopically.  One type of motion parallax is perceived with the Pulfrich effect.  A dark filter in front of one eye delays the time required for the image to build up in that eye.  If the image, or objects in that image, are moving laterally, they will appear in a different position in the filtered eye than in the unfiltered eye due to the delay, thus creating the parallactic difference needed for true stereopsis.  Another way to create motion parallax is to take a panning or laterally shifting 2D image and make a duplicate, shifting it a few frames relative to the other one when editing.  With either method, not only must there be lateral motion, but each eye must see only the image intended for it.  It is the shift, not the motion, that creates the stereopsis.  If both eyes see both images without the shift, the result is still 2D.  Thus, the so-called "wigglegrams" are NOT stereoscopic, but just jiggling flat animations still seen in 2D.
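The frame-offset editing trick described above can be sketched as follows (hypothetical frame list, not a real video-editing API): during a lateral camera move, frame n and frame n+k were taken from slightly different positions, so pairing them as left and right eyes yields genuine parallax.

```python
# Sketch of the frame-offset stereo technique (illustrative only).

def offset_stereo_pairs(frames, k=3, pan_left_to_right=True):
    """Yield (left, right) pairs from a laterally moving sequence.

    For a left-to-right camera move, the later frame was taken from a
    position further right, so it serves as the right-eye view; the
    pairing is reversed for a right-to-left move.
    """
    for n in range(len(frames) - k):
        early, late = frames[n], frames[n + k]
        yield (early, late) if pan_left_to_right else (late, early)

pairs = list(offset_stereo_pairs(["f0", "f1", "f2", "f3", "f4"], k=2))
# pairs[0] is ("f0", "f2"): two viewpoints separated by the camera motion.
```

As the text notes, this only works for truly lateral motion: the frame offset stands in for the interaxial shift, and each eye must be shown only its own stream.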

Someone mentioned a distance of 20 feet (about 6+ meters).  This is quite close in the visual field, and will yield a lot of stereoscopic depth.  Depth is perceptibly less at 200 feet (about 61 meters), but can still be easily seen with a 65mm interpupillary distance.  How do you eliminate the effect, if any, of monoscopic depth cues?  Easy.  You look at real objects at that distance (or any other distance that you want to test) normally with both eyes, then hold up your hand to cover one eye, and see what happens.  The image instantly flattens to 2D.  

John A. Rupkalvis
stereoscope3d@...
