
Re: New 3D lens announced

JackDesBwa|3D
 

What I saw in the minimalist simulation was:
- when I move the source point toward the adapter, the two images of this point are going outward
- when I move the source point away from the adapter, the two images of this point are going inward (touching at the center at infinity)
- when I move the source point away left/right, the two images of this point are going left/right by the same amount each
- I had to add opaque planes to avoid most parasitic rays going directly from the source to the lens, but in my quick geometry some directions leak anyway
The general relative movements are correct for a regular 3D photo, but I was not able to figure out what the final image would look like.

JackDesBwa


Re: New 3D lens announced

Bob Aldridge
 

Yes. With a single lens, the centre of each "bunch" of rays (left and right) MUST cross at the optical centre of the lens. If they don't, they will be bent by the lens and destroy the image...

The unavoidable problem with this is - as has been stated - that the bunches of rays cannot be parallel.

Bob Aldridge

On 10/04/2021 21:18, Antonio F.G. via groups.io wrote:
On Fri, Apr 9, 2021 at 03:09 PM, Depthcam wrote:
> By the way, such mirror design could produce parallel shots if the angle of the mirrors were adjusted for that.
Actually, no. This will only work if a mirror adapter is mounted on a set of two lenses - as is the case with the Leitz Stemar or the Zeiss Stereotar C. When you mount a mirror adapter on a single lens and keep the optical axes parallel, each side is viewing one side of the scene - just as if there were no adapter. In order for a left and right image to be recorded of the same scene, the optical axes MUST be converged.
I was not sure about this, but nothing like making a drawing of a "perfectly" parallel mirror adapter, with mirrors exactly at 45°:

The rays from a point at infinity arrive parallel at the mirrors, which send them to the lens in parallel as well. Therefore the lens images a single point on the sensor. This is exactly the same as if the mirrors were not there.

It seems that the way to get two separate L&R images of the point is to deviate the mirrors somewhat away from 45°.
But I am still not sure whether this necessarily implies convergence of the two optical axes (may need another drawing:-)

Regards
     Antonio


Re: New 3D lens announced

Antonio F.G.
 

On Fri, Apr 9, 2021 at 03:09 PM, Depthcam wrote:
> By the way, such mirror design could produce parallel shots if the angle of the mirrors were adjusted for that.
Actually, no. This will only work if a mirror adapter is mounted on a set of two lenses - as is the case with the Leitz Stemar or the Zeiss Stereotar C. When you mount a mirror adapter on a single lens and keep the optical axes parallel, each side is viewing one side of the scene - just as if there were no adapter. In order for a left and right image to be recorded of the same scene, the optical axes MUST be converged.
I was not sure about this, but nothing like making a drawing of a "perfectly" parallel mirror adapter, with mirrors exactly at 45°:

The rays from a point at infinity arrive parallel at the mirrors, which send them to the lens in parallel as well. Therefore the lens images a single point on the sensor. This is exactly the same as if the mirrors were not there.

It seems that the way to get two separate L&R images of the point is to deviate the mirrors somewhat away from 45°.
But I am still not sure whether this necessarily implies convergence of the two optical axes (may need another drawing:-)

Regards
     Antonio


Re: New 3D lens announced

JackDesBwa|3D
 

Thanks for explaining how you can correct vertical disparities due to convergence of optical axes.

To complement the answer of Antonio, you might want to look at this montage explaining the principle visually (text complement in the description) with images taken with divergent optical axes (horizontal and vertical divergence) and rotation: https://stereopix.net/photo:koUNdrW2kF/

I guess SPM surely uses the same approach as StMani3

It is very likely, but then the feature matching (or the optimizer?) is not as good as the one you use in StMani3, because the keystone distortion is often noticeably sub-optimally corrected.

JackDesBwa


Re: New 3D lens announced

JackDesBwa|3D
 

I remain unconvinced that all photos taken with converging optical axes can be corrected.  The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different  due to the axes pointing at different parts of the scene that do not match.

In the situation you describe, I see the combination of two effects:
1) The keystone effect, due to the angle, which can be canceled out (with crop implications already mentioned)
2) The extremely excessive background disparity (going out of the image because the field of view is limited), due to the too large base, that cannot be corrected (at least not easily).
 
By the way, such mirror design could produce parallel shots if the angle of the mirrors were adjusted for that.
Actually, no.  This will only work if a  mirror adapter is mounted on a set of two lenses  - as is the case with the Leiz Stemar or the Zeiss Stereotar C.  When you mount a mirror adapter on a single lens and keep the optical axes parallel, each side is viewing one side of the scene - just as if there were no adapter. In order for a left and right image to be recorded of the same scene, the optical axes MUST be converged.

Could you detail how you come to this conclusion? I am not very good in optics (so my initial assumption might be wrong), but your version does not sound right to me.
I tried a quick and dirty simulation. It is very simplified, but it tends to confirm that the two situations are not equivalent. Maybe I missed something.
parallel_mirror_test.jpg

Also, I'd like to remind all that the subject of this thread is commercially available 3D lenses - not whether it is technically possible to correct distortions caused by poorly designed accessories.
My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.

Sorry, but this sub-thread started because you suggested that stereo converters for single lenses should be avoided because they introduce opposite keystone distortion, making their design inherently bad, which is a no-go. Several messages responded that, used well, they can give interesting pictures.
Your arguments are not weak per se (people thinking or advertising that it gives good stereo without work is sad), but they are weak in regard to the statement that stereo converters for single lenses are inherently bad. I do not use them myself, but you would not convince me with this argument.

> You could also conclude that all lenses that exist are bad, because they introduce distortions
You seem to be missing the point that the cause of eyestrain in this particular case is OPPOSITE keystone distortion. It is the mismatch that causes the eyestrain - not the distortion itself.  If you take a picture of a building and point your camera up, you will also get keystone distortion but it will be the same in both the left and right images - therefore, comfortable to view.

I perfectly understand how these distortions behave. This particular line was meant to generalize your argument that using a device that has distortion to correct distortion is very stupid, and to show its weakness, because we do it with lenses all the time. I was thinking of 2D photography, but actually lens distortion introduces mismatch, and thus 3D discomfort, as well.

JackDesBwa


Re: New 3D lens announced

Antonio F.G.
 

On Sat, Apr 10, 2021 at 04:41 AM, Olivier Cahen wrote:
Thanks for explaining how you can correct vertical disparities due to convergence of optical axes.
I wrote a paper to explain the alignment process used in the StMani3 program:
https://www.researchgate.net/publication/349634883_ALIGNMENT_OF_STEREO_DIGITAL_IMAGES

You can read just the first two pages that explain the approach: re-project the images of the unaligned stereo pair into a common virtual sensor plane. It explains there why any convergence angle can be corrected to null the vertical disparity, using just perspective transforms. You can skip the rest of the paper that tells the math process to find the virtual sensor plane.
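The re-projection Antonio describes can be sketched numerically. Below is a minimal illustration (not StMani3's actual code) of pushing pixel coordinates through a 3×3 perspective transform; the matrix values are invented purely for illustration:

```python
import numpy as np

def apply_homography(H, points):
    """Map 2D pixel coordinates through a 3x3 perspective transform."""
    pts = np.asarray(points, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])          # lift to homogeneous coordinates
    mapped = homog @ H.T                    # apply the 3x3 transform
    return mapped[:, :2] / mapped[:, 2:3]   # divide by w: back to Cartesian

# Illustrative homography: a slight horizontal perspective (keystone) tilt.
H = np.array([[1.0,  0.0, 0.0],
              [0.0,  1.0, 0.0],
              [1e-4, 0.0, 1.0]])

# Image corners move by different amounts: the hallmark of keystone.
corners = apply_homography(H, [[0, 0], [1000, 0], [0, 800], [1000, 800]])
```

The perspective division by w is what distinguishes this from a plain affine warp, and it is exactly what lets a pure perspective transform null the vertical disparity for any convergence angle.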

Hey, only the vertical disparity can be corrected easily! The horizontal one is a very different animal. I also talk about it in the document, and my best advice for correcting a pair with the wrong horizontal disparity (either too much or too little) is to take your camera, go back to the place and shoot again, taking care of the relationship between convergence, distance, stereo base, focal length, et al.:-)


For instance, it is not possible with StereoPhoto Maker.
I guess SPM surely uses the same approach as StMani3: find the minimum error by successive approximations (I do not know any other way). The approach requires an initial estimate of the solution; if the estimate is near the solution, the process will quickly converge to it. If it is too far, it will not converge, or will converge to a wrong solution.
SPM surely starts by assuming the image pair is "reasonable", i.e. without excessive convergence or rotation. This may make it fail if the initial convergence is too high. I guess it could perhaps be made to work by doing an approximate manual alignment first (the so-called "Easy Adjustment"), and an "Auto Alignment" afterwards (but I have not really tested this).
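The sensitivity to the initial estimate can be illustrated with a toy optimizer (nothing to do with SPM's or StMani3's actual internals): a simple descent on a non-convex error curve ends up in different minima depending on where it starts.

```python
# Toy illustration: gradient descent on error(x) = (x^2 - 1)^2,
# which has two minima, at x = -1 and x = +1. Starting on different
# sides of the hump at x = 0 leads to different solutions.
def descend(grad, x0, rate=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= rate * grad(x)   # step downhill along the error gradient
    return x

grad = lambda x: 4 * x * (x * x - 1)   # derivative of (x^2 - 1)^2

near = descend(grad, 0.5)    # good initial estimate -> finds x = +1
far = descend(grad, -0.5)    # estimate on the wrong side -> finds x = -1
```

This is why a rough manual pre-alignment before the automatic one can rescue pairs that the optimizer alone gets wrong.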

Have you tried Hugin? Although it is not supposed to be made for stereo, JackDesBwa showed it is an extremely powerful alignment tool that also handles stitching of several images into panoramas and lens correction (StMani3 deals with neither lenses nor panoramas). The biggest trouble is learning to use Hugin:-)

Regards
    Antonio


Re: New 3D lens announced

Olivier Cahen
 

Thanks for explaining how you can correct vertical disparities due to convergence of optical axes. For instance, it is not possible with StereoPhoto Maker.

Best regards, Olivier

Le 9 avr. 2021 à 22:56, Antonio F.G. via groups.io <afgalaz@...> a écrit :

On Fri, Apr 9, 2021 at 03:09 PM, Depthcam wrote:
 I remain unconvinced that all photos taken with converging optical axes can be corrected. 
What I say (and can prove) is that any optical convergence can be corrected to null the VERTICAL disparity.



The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different  due to the axes pointing at different parts of the scene that do not match.
You are talking now of HORIZONTAL disparity. Sure, if the horizontal disparity were much higher than the 1/30th rule allows, the pair would be un-viewable regardless of the vertical alignment. And this part can NOT be corrected, at least using simple perspective transforms.



My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.
I solemnly promise NEVER to put a mirror lens in the market:-)
But I would eagerly buy the Kúla Deeper if it were not discontinued. That is because I would like to give a stereo use to my Fuji X-M1, which is a much, much better camera than the NX1000s of my present rig.

Regards
     Antonio


Re: Standard Test Images Wanted #theory #viewing #vrheadset

Bill Costa as just a member
 

You're welcome to take anything from here that may be of use.  ...BC


On Fri, Apr 9, 2021 at 7:52 PM Jay Kusnetz <jay31415@...> wrote:

I am volunteering to organize a resource for the community: a collection of stereoscopic images that could be used as standard test images. Basically our version of "Lena" or a "china doll", and a version similar to DSC's test film such as http://dsclabs.com/specialist-and-skin-tone-charts/
It would be good to also have a variety of subject matter, and especially textures that could show off the resolution of the hardware display AND the processing pipeline.

Unfortunately, I don't have high-end cameras, just a W3 and an old Nikon DSLR, so I can't shoot anything of high enough resolution and quality.

I can host them on the ggstereo.org site, and Internet Archive. Looking for suggestions for other repositories.

Initial use will be in an AltspaceVR world that will have the various measurements as part of the world (ie, 2 meter, 4 meter, and 6 meter floor marking in front of a 1 meter square picture)

Images should be either in the public domain, or https://creativecommons.org/licenses/ so that they can be freely used.
Please let me know if you have any images you can contribute.



--
Bill.Costa@...
+1.603.435.8526
https://mypages.unh.edu/wfc
No good deed goes unpunished.


Standard Test Images Wanted #theory #viewing #vrheadset

Jay Kusnetz
 

I am volunteering to organize a resource for the community: a collection of stereoscopic images that could be used as standard test images. Basically our version of "Lena" or a "china doll", and a version similar to DSC's test film such as http://dsclabs.com/specialist-and-skin-tone-charts/
It would be good to also have a variety of subject matter, and especially textures that could show off the resolution of the hardware display AND the processing pipeline.

Unfortunately, I don't have high-end cameras, just a W3 and an old Nikon DSLR, so I can't shoot anything of high enough resolution and quality.

I can host them on the ggstereo.org site, and Internet Archive. Looking for suggestions for other repositories.

Initial use will be in an AltspaceVR world that will have the various measurements as part of the world (ie, 2 meter, 4 meter, and 6 meter floor marking in front of a 1 meter square picture)

Images should be either in the public domain, or https://creativecommons.org/licenses/ so that they can be freely used.
Please let me know if you have any images you can contribute.


Re: New 3D lens announced

gl
 


I think it's really useful to highlight the issues with these types of adapters. But it's also true that every stereo capture method (at least those most of us can afford) requires post-processing of some kind to get the best from it.

What _is_ visually fool-proof at the consumer/prosumer level? Even 'easy to use' depth-map images from phones are full of artifacts (just different ones), and each type of artifact compromises the viewing experience unless improved somehow.

What's interesting about stereo is how crucial those corrections are. Bad 2D photos may suck, but nobody would say that all 2D photography is bad just because there are badly shot or processed photos out there. We can live with all kinds of 2D distortions. But if you're gonna create stereo content, you're kinda forced to apply corrections unless you want to turn people off.

she's a harsh mistress ...
--
gl


On 09/04/2021 21:56, Antonio F.G. via groups.io wrote:
On Fri, Apr 9, 2021 at 03:09 PM, Depthcam wrote:
 I remain unconvinced that all photos taken with converging optical axes can be corrected. 
What I say (and can prove) is that any optical convergence can be corrected to null the VERTICAL disparity.



The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different  due to the axes pointing at different parts of the scene that do not match.
You are talking now of HORIZONTAL disparity. Sure, if the horizontal disparity were much higher than the 1/30th rule allows, the pair would be un-viewable regardless of the vertical alignment. And this part can NOT be corrected, at least using simple perspective transforms.



My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.
I solemnly promise NEVER to put a mirror lens in the market:-)
But I would eagerly buy the Kúla Deeper if it were not discontinued. That is because I would like to give a stereo use to my Fuji X-M1, which is a much, much better camera than the NX1000s of my present rig.

Regards
     Antonio


Re: New 3D lens announced

Antonio F.G.
 

On Fri, Apr 9, 2021 at 03:09 PM, Depthcam wrote:
 I remain unconvinced that all photos taken with converging optical axes can be corrected. 
What I say (and can prove) is that any optical convergence can be corrected to null the VERTICAL disparity.



The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different  due to the axes pointing at different parts of the scene that do not match.
You are talking now of HORIZONTAL disparity. Sure, if the horizontal disparity were much higher than the 1/30th rule allows, the pair would be un-viewable regardless of the vertical alignment. And this part can NOT be corrected, at least using simple perspective transforms.
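The 1/30th rule mentioned above makes for a quick sanity check at shooting time: keep the stereo base to at most about 1/30th of the distance to the nearest subject. A one-line helper (a rule of thumb only; the function name is made up):

```python
def max_base(nearest_distance_m, ratio=30):
    """Rule-of-thumb maximum stereo base for a given nearest-subject
    distance, per the classic 1/30th rule."""
    return nearest_distance_m / ratio

# Nearest subject at 2 m -> base of about 6.7 cm, close to eye spacing.
base = max_base(2.0)
```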



My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.
I solemnly promise NEVER to put a mirror lens in the market:-)
But I would eagerly buy the Kúla Deeper if it were not discontinued. That is because I would like to give a stereo use to my Fuji X-M1, which is a much, much better camera than the NX1000s of my present rig.

Regards
     Antonio


Re: New 3D lens announced

Depthcam
 

> - Antonio reacted to the too common myth that keystone cannot be corrected


 I remain unconvinced that all photos taken with converging optical axes can be corrected.  The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different  due to the axes pointing at different parts of the scene that do not match.

Also, I'd like to remind all that the subject of this thread is commercially available 3D lenses - not whether it is technically possible to correct distortions caused by poorly designed accessories.

My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.


> By the way, such mirror design could produce parallel shots if the angle of the mirrors were adjusted for that.


Actually, no. This will only work if a mirror adapter is mounted on a set of two lenses - as is the case with the Leitz Stemar or the Zeiss Stereotar C. When you mount a mirror adapter on a single lens and keep the optical axes parallel, each side is viewing one side of the scene - just as if there were no adapter. In order for a left and right image to be recorded of the same scene, the optical axes MUST be converged.


> With the same weak argument, you could conclude that parallel dual cameras are bad devices because published as-is the images are likely badly aligned, with bad window placement, with window violation, possibly with lens distortion, color mismatch...


a) Sorry, but the argument is not weak.  It is the result of viewing decades of distorted eye-straining images created with such devices - that are marketed as "a simple way to get stunning 3D images".  Also keep in mind that, for the very many decades when those devices were widely marketed, there was no way to correct for the inherent opposite keystone distortion.

b) Do not confuse home-made stereo rigs with commercially available products.  Slight vertical misalignment can occur with commercially produced stereo cameras but it seldom causes the strong eye-strain that single-lens SBS 3D converters produce by design.


> You could also conclude that all lenses that exist are bad, because they introduce distortions


You seem to be missing the point that the cause of eyestrain in this particular case is OPPOSITE keystone distortion. It is the mismatch that causes the eyestrain - not the distortion itself.  If you take a picture of a building and point your camera up, you will also get keystone distortion but it will be the same in both the left and right images - therefore, comfortable to view.


> the fact that the cameras or computer software correct them is not a reason to use such lenses in the first place.


Again, you are missing the point that converters such as the Kula Deeper and all its predecessors are marketed as devices that produce "perfect 3D out of the box".  The fact that stereo enthusiasts may recognize the inherent distortions they cause and be able to correct some of the distortions they produce is not very relevant because stereo enthusiasts are a minority.  Those adapters are marketed to average users that, for the most part, know nothing about 3D.  As I pointed out before, pictures taken with the Kula Deeper show up on social media "as is" - with no correction - and they are eye-straining to view.  Even Kula posted uncorrected eye-straining images on their website as examples of the "good 3D" their device produces.
 
If stereo products are to be commercially marketed, they should be designed in such a way that they produce pleasant results even for people that have no knowledge or understanding of 3D.

Francois


Re: Photographer and Designer Builds 3D Printed Stereoscopic ‘Wiggle Lens’

Depthcam
 

Michael already posted a link about this in another thread.  However, neither in his link nor on the photographer's site was it mentioned that the APS-C version only has two lenses - making it essentially a homemade version of the Lumix 3D lens !

For an acceptable wiggle, it's best to have at least three lenses and even then, when fitting three lenses onto a single lens mount, the interaxial ends up pretty small. Therefore the effect only works well at close range.  And even then, the images end up pretty narrow.

But as for the two-lens version for APS-C, well, I think I'd choose an original Lumix 3D lens over a 3D-printed homemade one - even though his lenses might be set a bit wider apart to accommodate the slightly larger sensor (the Lumix lens is optimized for an M43 sensor).

For the three-lens model, one first needs to get a full-frame DSLR...

I think I'll pass.

But you gotta admit the three-lens model does look pretty groovy !

Francois


Re: New 3D lens announced

JackDesBwa|3D
 

I was thinking that the keystone-correcting software worked similarly to the perspective-control software used to correct the extreme perspective in photographs taken with wide-angle lenses inclined with respect to the surface of the object.

The phenomenon is exactly the same: a projection onto a plane that is not parallel to the subject [successive depth planes in the case of stereo].
The ideal lens is also the same: a shift lens, to keep the sensor parallel while capturing rays coming from an angled direction.
In modeling software, such a shift lens is used for the stereo cameras, because it is cheap to build in software (contrary to the real-world one) and allows setting the base and window independently without requiring post-processing.
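For the curious, here is a hedged sketch of how such a software shift lens is typically set up (names and conventions invented, not taken from any particular renderer): each camera keeps its axis parallel and shifts its view frustum sideways so that both frusta coincide at the stereo-window plane, which places zero parallax there.

```python
def stereo_frustum(half_base, window_dist, near, half_width, eye):
    """Left/right clip-plane x-bounds at the near plane for one camera
    of a parallel-axis stereo pair with an off-axis (shifted) frustum.

    eye: -1 for the left camera, +1 for the right camera.
    The window spans [-half_width, +half_width] at window_dist; scaling
    by near/window_dist projects its edges onto the near plane, and the
    shift accounts for the camera sitting at x = eye * half_base.
    """
    scale = near / window_dist
    shift = eye * half_base * scale          # horizontal frustum shift
    left = -half_width * scale - shift
    right = half_width * scale - shift
    return left, right

# Example: 65 mm base, window at 2 m, near plane 0.1 m, window 1 m wide.
l, r = stereo_frustum(0.0325, 2.0, 0.1, 0.5, eye=-1)
```

Because only the frustum bounds change (not the camera orientation), the sensor planes stay parallel and no keystone is introduced; the base and the window distance can be chosen independently.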

JackDesBwa


Re: New 3D lens announced

Oktay
 

Thanks for the comprehensive explanation.

I was thinking that the keystone-correcting software worked similarly to the perspective-control software used to correct the extreme perspective in photographs taken with wide-angle lenses inclined with respect to the surface of the object.

Oktay


Re: New 3D lens announced

JackDesBwa|3D
 


Does the resolution of the right side image gradually decrease from the right edge of the image to the left edge of the image when correcting keystone distortions? (Same question for the left side image of course)
Or is the resolution or the number of pixels distributed homogenously all over the image area?

The general principle is that the software uses a mathematical formula to associate a coordinate in the source image with each pixel of the destination image. For the keystone correction it is a simple perspective (3×3 matrix) transform, but it could be a more complex formula to correct lens distortion, for example (or a combination of lens and keystone distortions, and so on...). It could even be a different formula per color channel, for example to correct chromatic aberrations.

transform.jpg
Examples of transforms with this method: Top-left: original; Top-right: linear transform (3×3 matrix); bottom-left: quadratic transform; bottom-right: different translation per color channel.

Of course, there is almost no chance that the computed coordinate will be a whole number, which means that the destination pixel will come from a place "in between" several pixels in the source image. To determine the actual value, the software will use an interpolation function, which will estimate the intermediate value based on more or less neighbors depending on the interpolation method.

If the transition between the pixels is regular enough (with regard to the interpolation method), the recreated value will be very close to the actual value there. Of course, with extreme transforms where the formula determines that many destination pixels come from the same small interval of pixels in the source image, the algorithm will not have enough sampling points to recreate a pertinent value and the destination will look smoothed, which is probably what you call a decrease of resolution (there are evenly distributed new pixels, but their values are determined by fewer sensor samples). You can compare the areas of the source and destination images to get an idea of how the density of samples is distributed, although the actual resolution increment or decrement will also depend on the final size of the destination image. I hope this answers your questions, because I am not sure how they should be understood.
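To make the inverse-mapping-plus-interpolation idea concrete, here is a small sketch (illustrative only, not the actual code of any stereo tool) that fills each destination pixel by bilinear interpolation at the mapped source coordinate:

```python
import numpy as np

def warp_inverse(src, mapping, out_shape):
    """Inverse-map each destination pixel to a source coordinate and
    bilinearly interpolate. mapping(x, y) returns the source (sx, sy)."""
    h, w = out_shape
    out = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            sx, sy = mapping(x, y)
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            # Skip destination pixels whose source falls outside the image.
            if 0 <= x0 < src.shape[1] - 1 and 0 <= y0 < src.shape[0] - 1:
                fx, fy = sx - x0, sy - y0   # fractional position in the cell
                out[y, x] = (src[y0, x0] * (1 - fx) * (1 - fy)
                             + src[y0, x0 + 1] * fx * (1 - fy)
                             + src[y0 + 1, x0] * (1 - fx) * fy
                             + src[y0 + 1, x0 + 1] * fx * fy)
    return out

# Example: shift half a pixel right; destination pixel (x, y) samples the
# source at (x - 0.5, y), so values blend between horizontal neighbors.
src = np.array([[0.0, 1.0, 2.0],
                [0.0, 1.0, 2.0],
                [0.0, 1.0, 2.0]])
dst = warp_inverse(src, lambda x, y: (x - 0.5, y), (2, 3))
```

Real software uses the same scheme with a keystone or lens-distortion formula in place of the half-pixel shift, and often a larger interpolation neighborhood (bicubic, Lanczos) than the 2×2 one shown here.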

Here is how the image is deformed with the keystone correction in left/right direction.
Hoping that the image is not compressed by the mailing system, you can zoom on the image.
keystone.jpg
With small angles, the deformation is quite minimal, so we do not have to worry about visual degradation (but it is enough to get an improvement in stereo comfort).
Even with larger angles, as used when preparing phantograms for example, the resulting image generally looks good. This trick for processing the images works really well.

JackDesBwa


Back At The Golf Course Ready To Chat

 

I'm back at the golf course ready to chat, kids!

https://youtu.be/WB9yDrpYN7Q


Re: New 3D lens announced

Oktay
 

On Thu, Apr 8, 2021 at 03:25 AM, Antonio F.G. wrote:
>>I agree these mirror lenses are very effective head-ache makers if sold without correcting software.<<
I have very little computing skill, so I have to ask a question about this correcting software:

Does the resolution of the right side image gradually decrease from the right edge of the image to the left edge of the image when correcting keystone distortions? (Same question for the left side image of course)
Or is the resolution or the number of pixels distributed homogenously all over the image area?

Oktay


Re: Ingenuity on Mars in 3D #stereopix

JackDesBwa|3D
 

Here is my updated phantogram: https://stereopix.net/photo:koUNd1puWc/

JackDesBwa


Re: Ingenuity on Mars in 3D #stereopix

KenK
 

Yes! And the image on Mission sol 45 is a good example of the benefit of the stereopix viewer. You can "have it your way" (anaglyph vs SBS vs etc...).
https://mars.stereopix.net/
