
Re: New 3D lens announced

Antonio F.G.
 

On Sat, Apr 10, 2021 at 04:41 AM, Olivier Cahen wrote:
Thanks for explaining how you can correct vertical disparities due to convergence of optical axes.
I wrote a paper to explain the alignment process used in the StMani3 program:
https://www.researchgate.net/publication/349634883_ALIGNMENT_OF_STEREO_DIGITAL_IMAGES

You can read just the first two pages, which explain the approach: re-project the images of the unaligned stereo pair onto a common virtual sensor plane. They explain why any convergence angle can be corrected to null the vertical disparity, using just perspective transforms. You can skip the rest of the paper, which details the math used to find the virtual sensor plane.
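The re-projection idea can be sketched in a few lines of Python with NumPy. This is my own illustration of the principle, not code from StMani3 or the paper, and it assumes a simple pinhole model with focal length f in pixels and principal point (cx, cy): a pure convergence (pan) angle corresponds to the homography H = K·R·K⁻¹, and applying it re-projects the panned image onto a sensor plane parallel to the other camera's.

```python
import numpy as np

def homography_for_convergence(f, cx, cy, theta):
    """Homography that re-projects pixels from a camera panned by angle
    `theta` (radians, about the vertical axis) onto a sensor plane
    parallel to the un-panned one: H = K @ R @ K^-1."""
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])  # pan about vertical axis
    return K @ R @ np.linalg.inv(K)

def warp_points(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of pixel coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]
```

After this correction the vertical pixel coordinate of any 3D point matches the other camera's exactly, whatever its depth, which is the claim the paper proves; only the horizontal disparity remains.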

Hey, only the vertical disparity can be corrected easily! The horizontal one is a very different animal. I also discuss it in the document, and my best advice for correcting a pair with the wrong horizontal disparity (either too much or too little) is to take your camera, go back to the place and shoot again, taking care of the relationship between convergence, distance, stereo base, focal length, et al. :-)


It is not possible with StereoPhoto Maker, for instance.
I guess SPM surely uses the same approach as StMani3: find the minimum error by successive approximations (I do not know any other way). The approach requires an initial estimate of the solution; if the estimate is near the solution, the process converges quickly. If it is too far, it will not converge, or will converge to a wrong solution.
SPM surely starts by assuming the image pair is "reasonable", i.e. without excessive convergence or rotation. This may make it fail if the initial convergence is too high. I guess it could perhaps be made to work by doing an approximate manual alignment first (the so-called "Easy Adjustment"), followed by an "Auto Alignment" (but I have not really tested this).
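The convergence-basin point can be made concrete with a toy sketch. This is my own illustration, not SPM's or StMani3's actual algorithm: search for the roll angle that nulls the vertical disparity of matched points by repeatedly shrinking a bracket around the initial estimate.

```python
import numpy as np

def vertical_error(theta, left_pts, right_pts):
    """Sum of squared vertical disparities after rotating the right-image
    points by angle theta (radians) about the image centre."""
    c, s = np.cos(theta), np.sin(theta)
    rotated = right_pts @ np.array([[c, -s], [s, c]]).T
    return np.sum((rotated[:, 1] - left_pts[:, 1]) ** 2)

def refine(theta0, left_pts, right_pts, span=np.deg2rad(2.0), iters=40):
    """Successive approximation: shrink a bracket around the initial
    estimate theta0. It finds the true angle only if the truth lies in
    [theta0 - span, theta0 + span]; a starting point too far away
    converges to a wrong answer, the failure mode described above."""
    lo, hi = theta0 - span, theta0 + span
    for _ in range(iters):
        grid = np.linspace(lo, hi, 5)
        errors = [vertical_error(t, left_pts, right_pts) for t in grid]
        i = int(np.argmin(errors))
        lo, hi = grid[max(i - 1, 0)], grid[min(i + 1, 4)]
    return 0.5 * (lo + hi)
```

With a true roll of 1° and a start at 0°, this recovers the angle; start it several degrees away and it locks onto the wrong local minimum, which is why an approximate manual alignment first can rescue the automatic one.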

Have you tried Hugin? Although it is not designed for stereo, JackDesBwa showed it is an extremely powerful alignment tool that also handles stitching several images into panoramas and lens correction (StMani3 deals with neither lenses nor panoramas). The biggest hurdle is learning to use Hugin. :-)

Regards
    Antonio


Re: New 3D lens announced

Olivier Cahen
 

Thanks for explaining how you can correct vertical disparities due to convergence of optical axes. It is not possible with StereoPhoto Maker, for instance.

Best regards, Olivier

On Apr 9, 2021, at 22:56, Antonio F.G. via groups.io <afgalaz@...> wrote:

On Fri, Apr 9, 2021 at 03:09 PM, Depthcam wrote:
 I remain unconvinced that all photos taken with converging optical axes can be corrected. 
What I say (and can prove) is that any optical convergence can be corrected to null the VERTICAL disparity.



The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different  due to the axes pointing at different parts of the scene that do not match.
You are talking now about HORIZONTAL disparity. Sure, if the horizontal disparity were much higher than the 1/30th rule allows, the pair would be unviewable regardless of the vertical alignment. And this part can NOT be corrected, at least not using simple perspective transforms.
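The 1/30th rule is easy to put in numbers. A hedged sketch of the arithmetic (the figures are common rules of thumb, not from this thread): for parallel axes, the on-sensor parallax difference between the nearest and farthest objects is d = f · B · (1/near − 1/far), and the rule caps the base at roughly 1/30th of the nearest distance.

```python
import math

def max_base_1_in_30(nearest_m):
    """Rule-of-thumb maximum stereo base: 1/30th of the distance to the
    nearest object in the scene (metres in, metres out)."""
    return nearest_m / 30.0

def disparity_mm(base_m, focal_mm, near_m, far_m=math.inf):
    """On-sensor parallax difference (mm) between the nearest and farthest
    objects, for parallel optical axes: d = f * B * (1/near - 1/far)."""
    return focal_mm * base_m * (1.0 / near_m - 1.0 / far_m)
```

For example, a 65 mm base with a 35 mm lens and the nearest object at 2 m gives d ≈ 1.14 mm, already close to the deviation budget often quoted for 35 mm film; this is exactly the convergence/distance/base/focal bookkeeping Antonio suggests doing before shooting.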



My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.
I solemnly promise NEVER to put a mirror lens in the market:-)
But I would eagerly buy the Kúla Deeper if it were not discontinued, because I would like to put my Fuji X-M1 to stereo use; it is a much, much better camera than the NX1000s of my present rig.

Regards
     Antonio


Re: Standard Test Images Wanted #theory #viewing #vrheadset

Bill Costa as just a member
 

You're welcome to take anything from here that may be of use.  ...BC


On Fri, Apr 9, 2021 at 7:52 PM Jay Kusnetz <jay31415@...> wrote:

I am volunteering to organize a resource for the community; a collection of stereoscopic images that could be used as standard test images. Basically our version of "Lena" or a "china doll" and a version similar to DSC's test film such as http://dsclabs.com/specialist-and-skin-tone-charts/
Would be good to also have a variety of subject matter, and especially textures that could show off the resolution of the hardware display, AND the processing pipeline.

Unfortunately, I don't have high-end cameras, just a W3 and an old Nikon DSLR, so I can't shoot anything of high enough resolution and quality.

I can host them on the ggstereo.org site, and Internet Archive. Looking for suggestions for other repositories.

Initial use will be in an AltspaceVR world that will have the various measurements as part of the world (i.e. 2-meter, 4-meter, and 6-meter floor markings in front of a 1-meter-square picture)

Images should be either in the public domain, or https://creativecommons.org/licenses/ so that they can be freely used.
Please let me know if you have any images you can contribute.



--
Bill.Costa@...
+1.603.435.8526
https://mypages.unh.edu/wfc
No good deed goes unpunished.


Standard Test Images Wanted #theory #viewing #vrheadset

Jay Kusnetz
 

I am volunteering to organize a resource for the community; a collection of stereoscopic images that could be used as standard test images. Basically our version of "Lena" or a "china doll" and a version similar to DSC's test film such as http://dsclabs.com/specialist-and-skin-tone-charts/
Would be good to also have a variety of subject matter, and especially textures that could show off the resolution of the hardware display, AND the processing pipeline.

Unfortunately, I don't have high-end cameras, just a W3 and an old Nikon DSLR, so I can't shoot anything of high enough resolution and quality.

I can host them on the ggstereo.org site, and Internet Archive. Looking for suggestions for other repositories.

Initial use will be in an AltspaceVR world that will have the various measurements as part of the world (i.e. 2-meter, 4-meter, and 6-meter floor markings in front of a 1-meter-square picture)

Images should be either in the public domain, or https://creativecommons.org/licenses/ so that they can be freely used.
Please let me know if you have any images you can contribute.


Re: New 3D lens announced

gl
 


I think it's really useful to highlight the issues with these types of adapters.  But it's also true that every stereo capture method (at least those most of us can afford) requires post-processing of some kind to get the best from it.

What _is_ visually fool-proof at the consumer/prosumer level?  Even 'easy to use' depth map images from phones are full of artifacts (just different ones), and each type of artifact compromises the viewing experience unless improved somehow.

What's interesting about stereo is how crucial those corrections are.  Bad 2D photos may suck, but nobody would say that all 2D photography is bad just because there are badly shot or processed photos out there.  We can live with all kinds of 2D distortions.  But if you're gonna create stereo content, you're kinda forced to apply corrections unless you want to turn people off.

she's a harsh mistress ...
--
gl


On 09/04/2021 21:56, Antonio F.G. via groups.io wrote:
On Fri, Apr 9, 2021 at 03:09 PM, Depthcam wrote:
 I remain unconvinced that all photos taken with converging optical axes can be corrected. 
What I say (and can prove) is that any optical convergence can be corrected to null the VERTICAL disparity.



The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different  due to the axes pointing at different parts of the scene that do not match.
You are talking now about HORIZONTAL disparity. Sure, if the horizontal disparity were much higher than the 1/30th rule allows, the pair would be unviewable regardless of the vertical alignment. And this part can NOT be corrected, at least not using simple perspective transforms.



My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.
I solemnly promise NEVER to put a mirror lens in the market:-)
But I would eagerly buy the Kúla Deeper if it were not discontinued, because I would like to put my Fuji X-M1 to stereo use; it is a much, much better camera than the NX1000s of my present rig.

Regards
     Antonio


Re: New 3D lens announced

Antonio F.G.
 

On Fri, Apr 9, 2021 at 03:09 PM, Depthcam wrote:
 I remain unconvinced that all photos taken with converging optical axes can be corrected. 
What I say (and can prove) is that any optical convergence can be corrected to null the VERTICAL disparity.



The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different  due to the axes pointing at different parts of the scene that do not match.
You are talking now about HORIZONTAL disparity. Sure, if the horizontal disparity were much higher than the 1/30th rule allows, the pair would be unviewable regardless of the vertical alignment. And this part can NOT be corrected, at least not using simple perspective transforms.



My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.
I solemnly promise NEVER to put a mirror lens in the market:-)
But I would eagerly buy the Kúla Deeper if it were not discontinued, because I would like to put my Fuji X-M1 to stereo use; it is a much, much better camera than the NX1000s of my present rig.

Regards
     Antonio


Re: New 3D lens announced

Depthcam
 

> - Antonio reacted to the too common myth that keystone cannot be corrected


 I remain unconvinced that all photos taken with converging optical axes can be corrected.  The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different  due to the axes pointing at different parts of the scene that do not match.

Also, I'd like to remind all that the subject of this thread is commercially available 3D lenses - not whether it is technically possible to correct distortions caused by poorly designed accessories.

My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.


> By the way, such mirror design could produce parallel shots if the angle of the mirrors were adjusted for that.


Actually, no.  This will only work if a mirror adapter is mounted on a set of two lenses - as is the case with the Leitz Stemar or the Zeiss Stereotar C.  When you mount a mirror adapter on a single lens and keep the optical axes parallel, each side is viewing one side of the scene - just as if there were no adapter. In order for a left and right image to be recorded of the same scene, the optical axes MUST be converged.


> With the same weak argument, you could conclude that parallel dual cameras are bad devices because published as-is the images are likely badly aligned, with bad window placement, with window violation, possibly with lens distortion, color mismatch...


a) Sorry, but the argument is not weak.  It is the result of viewing decades of distorted eye-straining images created with such devices - that are marketed as "a simple way to get stunning 3D images".  Also keep in mind that, for the very many decades when those devices were widely marketed, there was no way to correct for the inherent opposite keystone distortion.

b) Do not confuse home-made stereo rigs with commercially available products.  Slight vertical misalignment can occur with commercially produced stereo cameras but it seldom causes the strong eye-strain that single-lens SBS 3D converters produce by design.


> You could also conclude that all lenses that exist are bad, because they introduce distortions


You seem to be missing the point that the cause of eyestrain in this particular case is OPPOSITE keystone distortion. It is the mismatch that causes the eyestrain - not the distortion itself.  If you take a picture of a building and point your camera up, you will also get keystone distortion but it will be the same in both the left and right images - therefore, comfortable to view.


> the fact that the cameras or computer software correct them is not a reason to use such lenses in the first place.


Again, you are missing the point that converters such as the Kula Deeper and all its predecessors are marketed as devices that produce "perfect 3D out of the box".  The fact that stereo enthusiasts may recognize the inherent distortions they cause and be able to correct some of the distortions they produce is not very relevant because stereo enthusiasts are a minority.  Those adapters are marketed to average users that, for the most part, know nothing about 3D.  As I pointed out before, pictures taken with the Kula Deeper show up on social media "as is" - with no correction - and they are eye-straining to view.  Even Kula posted uncorrected eye-straining images on their website as examples of the "good 3D" their device produces.
 
If stereo products are to be commercially marketed, they should be designed in such a way that they produce pleasant results even for people that have no knowledge or understanding of 3D.

Francois


Re: Photographer and Designer Builds 3D Printed Stereoscopic ‘Wiggle Lens’

Depthcam
 

Michael already posted a link about this in another thread.  However, neither in his link nor on the photographer's site was it mentioned that the APS-C version only has two lenses - making it essentially a homemade version of the Lumix 3D lens !

For an acceptable wiggle, it's best to have at least three lenses and even then, when fitting three lenses onto a single lens mount, the interaxial ends up pretty small. Therefore the effect only works well at close range.  And even then, the images end up pretty narrow.

But for the two-lens version for APS-C, well, I think I'd choose an original Lumix 3D lens over a 3D-printed homemade one - even though his lenses might be set a bit wider apart to accommodate the slightly larger sensor (the Lumix lens is optimized for an M43 sensor).

For the three-lens model, one first needs to get a full-frame DSLR...

I think I'll pass.

But you gotta admit the three-lens model does look pretty groovy !

Francois


Re: New 3D lens announced

JackDesBwa|3D
 

I was thinking that the keystone-correcting software worked similarly to the perspective-control software used to correct the extreme perspective in photographs taken with wide-angle lenses inclined with respect to the surface of the object.

The phenomenon is exactly the same: a projection onto a plane that is not parallel to the subject [successive depth planes in the case of stereo].
The ideal lens is also the same: a shift lens to keep the sensor parallel while getting rays coming from an angled direction.
In modeling software, such a shift lens is used for the stereo cameras, because it is cheap to build in software (contrary to the real-world one) and allows setting the base and the window independently without requiring post-processing.
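The software shift lens Jack describes is usually built as an asymmetric ("off-axis") frustum. Here is a sketch of the standard recipe (my own illustration, following the common OpenGL-style convention, not any particular package's API): both eyes keep their sensor planes parallel, the frustum is merely shifted sideways, and points on the convergence plane land at the same horizontal screen position (zero parallax), so the window is set with no keystone and no post-processing.

```python
import numpy as np

def stereo_frustum(fov_y_deg, aspect, near, far, eye_sep, conv_dist, eye):
    """OpenGL-style asymmetric frustum for one eye (eye = -1 left, +1 right).
    The frustum is shifted sideways instead of rotating the camera: the
    software equivalent of a shift lens, with the stereo window placed at
    conv_dist."""
    top = near * np.tan(np.deg2rad(fov_y_deg) / 2.0)
    half_w = top * aspect
    shift = -eye * (eye_sep / 2.0) * near / conv_dist  # sideways frustum shift
    l, r, b, t, n, f = -half_w + shift, half_w + shift, -top, top, near, far
    # glFrustum-style projection matrix
    return np.array([
        [2*n/(r-l), 0,         (r+l)/(r-l),  0],
        [0,         2*n/(t-b), (t+b)/(t-b),  0],
        [0,         0,        -(f+n)/(f-n), -2*f*n/(f-n)],
        [0,         0,        -1,            0],
    ])
```

A point straight ahead at the convergence distance projects to the same normalized x for both eyes, while a nearer point gets crossed parallax, which is exactly the base/window independence described above.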

JackDesBwa


Re: New 3D lens announced

Oktay
 

Thanks for the comprehensive explanation.

I was thinking that the keystone-correcting software worked similarly to the perspective-control software used to correct the extreme perspective in photographs taken with wide-angle lenses inclined with respect to the surface of the object.

Oktay


Re: New 3D lens announced

JackDesBwa|3D
 


Does the resolution of the right side image gradually decrease from the right edge of the image to the left edge of the image when correcting keystone distortions? (Same question for the left side image of course)
Or is the resolution or the number of pixels distributed homogeneously all over the image area?

The general principle is that the software uses a mathematical formula to associate a coordinate in the source image with each pixel of the destination image. For the keystone correction it is a simple projective transform (a 3×3 matrix), but it could be a more complex formula to correct lens distortion, for example (or a combination of lens and keystone distortions, and so on...). It could even be a different formula per color channel, for example to correct chromatic aberrations.

transform.jpg
Examples of transforms with this method: Top-left: original; Top-right: linear transform (3×3 matrix); bottom-left: quadratic transform; bottom-right: different translation per color channel.

Of course, there is almost no chance that the computed coordinate will be a whole number, which means that the destination pixel will come from a place "in between" several pixels in the source image. To determine the actual value, the software will use an interpolation function, which will estimate the intermediate value based on more or less neighbors depending on the interpolation method.

If the transition between the pixels is regular enough (with regard to the interpolation method), the recreated value will be very close to the actual value there. Of course, with extreme transforms, where the formula determines that many destination pixels come from the same small interval of pixels in the source image, the algorithm will not have enough sampling points to recreate a pertinent value and the destination will look smoothed, which is probably what you call a decrease in resolution (there are evenly distributed new pixels, but their values are determined by fewer sensor samples).

You can compare the areas of the source and destination images to get an idea of how the density of samples is distributed, although the actual resolution increase or decrease will also depend on the final size of the destination image. I hope this answers your questions; I am not sure I understood them exactly as intended.
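The destination-to-source scheme above can be sketched in a few lines of NumPy (a generic illustration, not the code of any particular stereo program): every destination pixel is mapped through the inverse transform to a fractional source coordinate, then bilinearly interpolated from its four neighbouring source pixels.

```python
import numpy as np

def warp_inverse(src, H_inv, out_shape):
    """Backward warping of a grayscale image: for each destination pixel,
    H_inv (here a 3x3 homography, but any per-pixel formula would do)
    gives the fractional source coordinate; the value is bilinearly
    interpolated from the four surrounding source pixels."""
    hd, wd = out_shape
    hs, ws = src.shape
    ys, xs = np.mgrid[0:hd, 0:wd]
    p = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    q = H_inv @ p
    u = np.clip(q[0] / q[2], 0, ws - 1)   # fractional source x
    v = np.clip(q[1] / q[2], 0, hs - 1)   # fractional source y
    x0 = np.clip(u.astype(int), 0, ws - 2)
    y0 = np.clip(v.astype(int), 0, hs - 2)
    fx, fy = u - x0, v - y0
    out = (src[y0, x0]         * (1 - fx) * (1 - fy)
         + src[y0, x0 + 1]     * fx       * (1 - fy)
         + src[y0 + 1, x0]     * (1 - fx) * fy
         + src[y0 + 1, x0 + 1] * fx       * fy)
    return out.reshape(out_shape)
```

Because only the sampling formula changes, swapping the bilinear step for a higher-order interpolation (bicubic, Lanczos...) changes how many neighbours contribute, exactly as described above.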

Here is how the image is deformed with the keystone correction in left/right direction.
Hoping that the image is not compressed by the mailing system, you can zoom on the image.
keystone.jpg
With small angles, the deformation is quite minimal so that we do not have to worry about a visual degradation (but it is enough to get improvement in stereo comfort)
Even with larger angles, used when preparing phantograms for example, the resulting image generally looks good. This trick for processing the images works really well.

JackDesBwa


Back At The Golf Course Ready To Chat

 

I'm back at the golf course ready to chat, kids!

https://youtu.be/WB9yDrpYN7Q


Re: New 3D lens announced

Oktay
 

On Thu, Apr 8, 2021 at 03:25 AM, Antonio F.G. wrote:
>>I agree these mirror lenses are very effective headache makers if sold without correcting software.<<
I have very little computing skill, so I have to ask a question about these correcting programs:

Does the resolution of the right side image gradually decrease from the right edge of the image to the left edge of the image when correcting keystone distortions? (Same question for the left side image of course)
Or is the resolution or the number of pixels distributed homogeneously all over the image area?

Oktay


Re: Ingenuity on Mars in 3D #stereopix

JackDesBwa|3D
 

Here is my updated phantogram: https://stereopix.net/photo:koUNd1puWc/

JackDesBwa


Re: Ingenuity on Mars in 3D #stereopix

KenK
 

Yes! And the image on Mission sol 45 is a good example of the benefit of the stereopix viewer. You can "have it your way" (anaglyph vs SBS vs etc...).
https://mars.stereopix.net/


Re: Ingenuity on Mars in 3D #stereopix

JackDesBwa|3D
 

The MastCam-Z photographed it too on SOL45 (arrived in the public repository in the meanwhile), better framed and with a smaller base.
I will probably redo the phantogram with this shot and delete the one I finished a few hours ago.

By the way, this picture was also used in today's APOD: https://apod.nasa.gov/apod/ap210408.html

JackDesBwa


Re: New 3D lens announced

Antonio F.G.
 
Edited

On Wed, Apr 7, 2021 at 02:58 PM, Depthcam wrote:
average people using these converters do not correct them and assume that the discomfort is just due to the "3D effect".  This is the result of companies putting devices on the market that have inherent optical design flaws and not warning their buyers about them.
I agree these mirror lenses are very effective headache makers if sold without correcting software.
But the correcting software might be something very simple for mirror arrangements like the Kúla, because the keystone angles of the device are hopefully constant from unit to unit, so the corrections could be fixed as well. The software does not need the complexity of SPM or StMani3. The user options could be limited to cropping the margins and selecting the output format (anaglyph, SBS, MPO...)

You worry that users often neglect to process the images even when they have the application to do it. This may be solved with mirror devices like the tri-def, because the effect of the mirrors makes it impossible to view those images directly. The user would be forced to process them with the software provided by the manufacturer.


It doesn't matter whether you use a cheap plastic camera or a high-end DSLR, the results are exactly the same as far as the shortcomings of such converters.  Even if you apply strong corrections, these will still require extensive cropping - not only to correct the keystone, but also the blending of the left and right image at the center.  You also have to take into account that these converters can only be mounted to long lenses, which means that the already narrow FOV is split into two and the cropping needed results in an even narrower view.  When used on shorter lenses, it causes vignetting.  In fact there are traces of vignetting at the top of the right image in one of your samples.  This would require further cropping.
All true. But you surely know that stereo photography is a world of difficult trade-offs: synch, stereo base, photo quality, weight, volume, et al. The mirror lenses offer perfect synch and the possibility of using a high-end camera, of course losing other things along the way.

Regards
   Antonio


Re: New photo registration software #workFlow #software #softwareDev #theory

Yitzhak Weissman
 

I believe that fCarta can be used for phantograms, although I did not test it in this application.
Yes, you can put the reference marks on the frame, but you could also use the frame corners themselves as reference points.
 
Itsik


Re: Ingenuity on Mars in 3D #stereopix

JackDesBwa|3D
 
Edited

The same day, the navcam photographed it with a longer focal, but it was split in 4 images.
I tried to assemble them and create a phantogram from it. The shooting conditions were not ideal for it, so that the helicopter is very stretched in height, but I published it nonetheless.
You might want to look at it from a lower position than usual to get better proportions.
[Removed link, now broken]
 
JackDesBwa


Re: New 3D lens announced

JackDesBwa|3D
 

Antonio and Jack thought they were very smart by showing that they "can" correct the distortions...

Not fair.
- Antonio reacted to the too common myth that keystone cannot be corrected, probably because your message could be understood as a reinforcement of that misunderstanding.
- In my message, I corrected the keystone effect so that it was easier to see the excessive base, because you denied it.
Nothing arrogant about it.
 
But my point is that those two pictures were published "as is" on social media as "great pictures taken with the Kula Deeper".

By the way, such mirror design could produce parallel shots if the angle of the mirrors were adjusted for that.
But then, published "as is" on social media, they would give bad stereoscopic photos anyway, because of window violations for example.

With the same weak argument, you could conclude that parallel dual cameras are bad devices because, published as is, the images are likely badly aligned, with bad window placement, window violations, possibly lens distortion, color mismatch...
You could also conclude that all lenses that exist are bad, because they introduce distortions, and that [mixed with another weak argument you gave] the fact that the cameras or computer software correct them is not a reason to use such lenses in the first place.
In a more general way, the fact that some people cannot use a tool (especially without training) is not a sign that the tool is bad per se.

That said, I would not encourage to use such a device myself.

JackDesBwa
