
Re: New 3D lens announced

Antonio F.G.
 

On Fri, Apr 9, 2021 at 03:09 PM, Depthcam wrote:
> I remain unconvinced that all photos taken with converging optical axes can be corrected.
What I say (and can prove) is that any optical convergence can be corrected to null the VERTICAL disparity.



> The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different due to the axes pointing at different parts of the scene that do not match.
You are talking now of HORIZONTAL disparity. Sure, if the horizontal disparity were much higher than the 1/30th rule allows, the pair would be unviewable, regardless of the vertical alignment. And this part can NOT be corrected, at least not with simple perspective transforms.



My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.
I solemnly promise NEVER to put a mirror lens in the market:-)
But I would eagerly buy the Kúla Deeper if it were not discontinued, because I would like to give a stereo use to my Fuji X-M1, which is a much, much better camera than the NX1000s of my present rig.

Regards
     Antonio


Re: New 3D lens announced

Depthcam
 

> - Antonio reacted to the too common myth that keystone cannot be corrected


I remain unconvinced that all photos taken with converging optical axes can be corrected.  The very simple reason for this is that if the convergence is at a subject at close range, the image recorded at far range may be completely different due to the axes pointing at different parts of the scene that do not match.

Also, I'd like to remind all that the subject of this thread is commercially available 3D lenses - not whether it is technically possible to correct distortions caused by poorly designed accessories.

My point is that if a product with inherent design flaws is put on the market, the buyers will mostly use them "as is" and that will result in images that cause eyestrain.


> By the way, such mirror design could produce parallel shots if the angle of the mirrors were adjusted for that.


Actually, no.  This will only work if a mirror adapter is mounted on a set of two lenses - as is the case with the Leitz Stemar or the Zeiss Stereotar C.  When you mount a mirror adapter on a single lens and keep the optical axes parallel, each side is viewing one side of the scene - just as if there were no adapter. In order for a left and right image to be recorded of the same scene, the optical axes MUST be converged.


> With the same weak argument, you could conclude that parallel dual cameras are bad devices because published as-is the images are likely badly aligned, with bad window placement, with window violation, possibly with lens distortion, color mismatch...


a) Sorry, but the argument is not weak.  It is the result of viewing decades of distorted eye-straining images created with such devices - that are marketed as "a simple way to get stunning 3D images".  Also keep in mind that, for the very many decades when those devices were widely marketed, there was no way to correct for the inherent opposite keystone distortion.

b) Do not confuse home-made stereo rigs with commercially available products.  Slight vertical misalignment can occur with commercially produced stereo cameras but it seldom causes the strong eye-strain that single-lens SBS 3D converters produce by design.


> You could also conclude that all lenses that exist are bad, because they introduce distortions


You seem to be missing the point that the cause of eyestrain in this particular case is OPPOSITE keystone distortion. It is the mismatch that causes the eyestrain - not the distortion itself.  If you take a picture of a building and point your camera up, you will also get keystone distortion but it will be the same in both the left and right images - therefore, comfortable to view.


> the fact that the cameras or computer software correct them is not a reason to use such lenses in the first place.


Again, you are missing the point that converters such as the Kula Deeper and all its predecessors are marketed as devices that produce "perfect 3D out of the box".  The fact that stereo enthusiasts may recognize the inherent distortions they cause and be able to correct some of the distortions they produce is not very relevant because stereo enthusiasts are a minority.  Those adapters are marketed to average users that, for the most part, know nothing about 3D.  As I pointed out before, pictures taken with the Kula Deeper show up on social media "as is" - with no correction - and they are eye-straining to view.  Even Kula posted uncorrected eye-straining images on their website as examples of the "good 3D" their device produces.
 
If stereo products are to be commercially marketed, they should be designed in such a way that they produce pleasant results even for people that have no knowledge or understanding of 3D.

Francois


Re: Photographer and Designer Builds 3D Printed Stereoscopic ‘Wiggle Lens’

Depthcam
 

Michael already posted a link about this in another thread.  However, neither in his link nor on the photographer's site was it mentioned that the APS-C version only has two lenses - making it essentially a homemade version of the Lumix 3D lens !

For an acceptable wiggle, it's best to have at least three lenses and even then, when fitting three lenses onto a single lens mount, the interaxial ends up pretty small. Therefore the effect only works well at close range.  And even then, the images end up pretty narrow.

But for the two-lens version for APS-C, well, I think I'd choose an original Lumix 3D lens over a 3D-printed homemade one - even though his lenses might be set a bit wider apart to accommodate the slightly larger sensor (the Lumix lens is optimized for an M43 sensor).

For the three-lens model, one first needs to get a full-frame DSLR...

I think I'll pass.

But you gotta admit the three-lens model does look pretty groovy !

Francois


Re: New 3D lens announced

JackDesBwa|3D
 

> I was thinking that the keystone correcting software was working similar to the perspective control software used to correct the extreme perspective in photographs taken with wide angle lenses with inclined angles with respect to the surface of the object.

The phenomenon is exactly the same: a projection on a plane that is not parallel to the subject [successive depth planes in the case of stereo].
The ideal lens is also the same: a shift lens to keep the sensor parallel while getting rays coming from an angled direction.
In modeling software, such a shift lens is used for the stereo cameras, because it is cheap to build in software, contrary to a real-world one, and it allows setting the base and window independently without requiring post-processing.
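To illustrate the shift-lens idea: here is a minimal numpy sketch of an asymmetric ("off-axis") frustum projection of the kind rendering software uses for stereo cameras. The function name and parameters are mine, not from any particular package; it follows the OpenGL glFrustum convention. Because only the frustum is sheared while the sensor plane stays parallel to the subject, no keystone is introduced, and the convergence distance sets the stereo window directly.

```python
import numpy as np

def off_axis_projection(near, far, fov_y_deg, aspect, half_base, convergence):
    """Asymmetric-frustum ("shift lens") projection matrix, OpenGL-style.

    half_base:   half of the stereo base; positive for the right camera,
                 negative for the left one.
    convergence: distance of the zero-parallax (stereo window) plane.
    """
    top = near * np.tan(np.radians(fov_y_deg) / 2)
    bottom, half_width = -top, top * aspect
    # Shear the frustum sideways so both cameras' frusta coincide
    # exactly at the convergence distance.
    shift = half_base * near / convergence
    left, right = -half_width - shift, half_width - shift
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[0, 2] = (right + left) / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[1, 2] = (top + bottom) / (top - bottom)
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m
```

A point on the central axis at the convergence distance projects to the screen center in both the left and right matrices (zero parallax), which is exactly the "window set without post-treatment" behavior described above.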

JackDesBwa


Re: New 3D lens announced

Oktay
 

Thanks for the comprehensive explanation.

I was thinking that the keystone correcting software was working similar to the perspective control software used to correct the extreme perspective in photographs taken with wide angle lenses with inclined angles with respect to the surface of the object.

Oktay


Re: New 3D lens announced

JackDesBwa|3D
 


> Does the resolution of the right side image gradually decrease from the right edge of the image to the left edge of the image when correcting keystone distortions? (Same question for the left side image of course)
> Or is the resolution or the number of pixels distributed homogenously all over the image area?

The general principle is that the software uses a mathematical formula to associate a coordinate in the source image with each pixel of the destination image. For the keystone correction, it is a simple projective transform, but it could be a more complex formula to correct lens distortion for example (or a combination of lens & keystone distortions, and so on...). It could even be a different formula per color channel, for example to correct chromatic aberrations.

transform.jpg
Examples of transforms with this method: Top-left: original; Top-right: linear transform (3×3 matrix); bottom-left: quadratic transform; bottom-right: different translation per color channel.

Of course, there is almost no chance that the computed coordinate will be a whole number, which means that the destination pixel will come from a place "in between" several pixels in the source image. To determine the actual value, the software will use an interpolation function, which estimates the intermediate value based on more or fewer neighbors, depending on the interpolation method.

If the transition between the pixels is regular enough (with regard to the interpolation method), the recreated value will be very close to the actual value there. Of course, with extreme transforms, where the formula determines that a lot of pixels of the destination come from the same small interval of pixels in the source image, the algorithm will not have enough sampling points to recreate a pertinent value and the destination will look smoothed, which is probably what you call a decrease of resolution (the new pixels are evenly distributed, but their values are determined by fewer sensor samples).

You can compare the areas of the source and destination images to get an idea of how the density of samples is distributed, although the actual resolution increment or decrement will also depend on the final size of the destination image. I hope this answers your questions, because I am not sure how they should be understood.
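The inverse-mapping principle described above can be sketched in a few lines of numpy. This is a bare-bones illustration (the function name and the choice of bilinear interpolation are mine; real software also offers bicubic, Lanczos, etc.): for every destination pixel, a 3×3 matrix gives the generally non-integer source coordinate, and the value is estimated from the 4 surrounding source pixels.

```python
import numpy as np

def warp_inverse(src, H_inv, out_shape):
    """Warp a grayscale image by inverse mapping: H_inv (3x3) maps each
    destination pixel to a source coordinate, whose value is estimated
    by bilinear interpolation."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    sx, sy, sw = H_inv @ dst
    sx, sy = sx / sw, sy / sw                      # homogeneous divide
    x0 = np.clip(np.floor(sx).astype(int), 0, src.shape[1] - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, src.shape[0] - 2)
    fx, fy = sx - x0, sy - y0                      # fractional parts
    # Bilinear interpolation: weighted mean of the 4 neighbours
    val = (src[y0, x0] * (1 - fx) * (1 - fy) +
           src[y0, x0 + 1] * fx * (1 - fy) +
           src[y0 + 1, x0] * (1 - fx) * fy +
           src[y0 + 1, x0 + 1] * fx * fy)
    return val.reshape(h, w)
```

Feeding this a projective matrix gives exactly the keystone correction discussed in this thread; swapping in another formula per pixel (or per channel) gives the lens-distortion or chromatic-aberration corrections mentioned above.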

Here is how the image is deformed with the keystone correction in left/right direction.
Hoping that the image is not compressed by the mailing system, you can zoom on the image.
keystone.jpg
With small angles, the deformation is quite minimal, so we do not have to worry about visual degradation (yet it is enough to improve stereo comfort).
Even with larger angles, as used when preparing phantograms for example, the resulting image generally looks good. This trick for processing the images works really well.

JackDesBwa


Back At The Golf Course Ready To Chat

 

I'm back at the golf course ready to chat, kids!

https://youtu.be/WB9yDrpYN7Q


Re: New 3D lens announced

Oktay
 

On Thu, Apr 8, 2021 at 03:25 AM, Antonio F.G. wrote:
>>I agree these mirror lenses are very effective head-ache makers if sold without correcting software.<<
I have very little computing skill, so I have to ask a question about this correcting software:

Does the resolution of the right side image gradually decrease from the right edge of the image to the left edge of the image when correcting keystone distortions? (Same question for the left side image of course)
Or is the resolution or the number of pixels distributed homogenously all over the image area?

Oktay


Re: Ingenuity on Mars in 3D #stereopix

JackDesBwa|3D
 

Here is my updated phantogram: https://stereopix.net/photo:koUNd1puWc/

JackDesBwa


Re: Ingenuity on Mars in 3D #stereopix

KenK
 

Yes! And the image on Mission sol 45 is a good example of the benefit of the stereopix viewer. You can "have it your way" (anaglyph vs SBS vs etc...).
https://mars.stereopix.net/


Re: Ingenuity on Mars in 3D #stereopix

JackDesBwa|3D
 

The MastCam-Z photographed it too on sol 45 (it arrived in the public repository in the meantime), better framed and with a smaller base.
I will probably redo the phantogram with this shot and delete the one I finished a few hours ago.

By the way, this picture was also used in today's APOD: https://apod.nasa.gov/apod/ap210408.html

JackDesBwa


Re: New 3D lens announced

Antonio F.G.
 
Edited

On Wed, Apr 7, 2021 at 02:58 PM, Depthcam wrote:
average people using these converters do not correct them and assume that the discomfort is just due to the "3D effect".  This is the result of companies putting devices on the market that have inherent optical design flaws and not warning their buyers about them.
I agree these mirror lenses are very effective head-ache makers if sold without correcting software.
But the correcting software could be something very simple for mirror arrangements like the Kúla, because the keystone angles of the device are presumably constant from unit to unit, so the corrections could be fixed as well. The software would not need the full machinery of SPM or StMani3. The user options could be limited to cropping the margins and selecting the output format (anaglyph, SBS, MPO...).
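Since the mirror angles are fixed by construction, the whole correction reduces to one constant 3×3 matrix per side, computed once and baked into the software. A minimal numpy sketch under a pinhole-camera assumption (the angle, focal length and principal point below are made-up illustration values, not Kúla specifications):

```python
import numpy as np

def keystone_homography(theta_deg, focal_px, cx, cy):
    """3x3 matrix modelling the keystone of a view converged by
    theta_deg around the vertical axis (pinhole model): H = K R K^-1.
    Its inverse, applied once per side, undoes the fixed keystone."""
    t = np.radians(theta_deg)
    K = np.array([[focal_px, 0.0, cx],       # intrinsics: focal + center
                  [0.0, focal_px, cy],
                  [0.0, 0.0, 1.0]])
    R = np.array([[np.cos(t), 0.0, np.sin(t)],   # rotation about y axis
                  [0.0, 1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return K @ R @ np.linalg.inv(K)
```

With opposite signs of `theta_deg` for the left and right halves, the two fixed matrices are all the "correcting software" would fundamentally need, before cropping and output formatting.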

You worry that users often neglect to process the images even if they have the application to do it. This may be solved with mirror devices like the tri-def, because the effect of the mirrors makes it impossible to view those images directly. The user would be forced to process them with the software provided by the manufacturer.


It doesn't matter whether you use a cheap plastic camera or a high-end DSLR, the results are exactly the same as far as the shortcomings of such converters.  Even if you apply strong corrections, these will still require extensive cropping - not only to correct the keystone, but also the blending of the left and right image at the center.  You also have to take into account that these converters can only be mounted to long lenses, which means that the already narrow FOV is split into two and the cropping needed results in an even narrower view.  When used on shorter lenses, it causes vignetting.  In fact there are traces of vignetting at the top of the right image in one of your samples.  This would require further cropping.
All true. But you surely know that stereo photography is a world of difficult trade-offs: synch, stereo base, photo quality, weight, volume, et al. The mirror lenses offer perfect synch and the possibility to use a high-end camera, of course losing other things along the way.

Regards
   Antonio


Re: New photo registration software #workFlow #software #softwareDev #theory

Yitzhak Weissman
 

I believe that fCarta can be used for phantograms, although I did not test it in this application.
Yes, you can put the reference marks on the frame, but you could also use the frame corners themselves as reference points.
 
Itsik


Re: Ingenuity on Mars in 3D #stereopix

JackDesBwa|3D
 
Edited

The same day, the navcam photographed it with a longer focal, but it was split in 4 images.
I tried to assemble them and create a phantogram from it. The shooting conditions were not ideal for it, so the helicopter is very stretched in height, but I published it nonetheless.
You might want to look at it from a lower position than usual to get better proportions.
[Removed link, now broken]
 
JackDesBwa


Re: New 3D lens announced

JackDesBwa|3D
 

> Antonio and Jack thought they were very smart by showing that they "can" correct the distortions...

Not fair.
- Antonio reacted to the all-too-common myth that keystone cannot be corrected, probably because your message could be understood as reinforcing that misunderstanding.
- In my message, I corrected the keystone effect so that it was easier to see the excessive base, because you denied it.
Nothing arrogant about it.
 
But my point is that those two pictures were published "as is" on social media as "great pictures taken with the Kula Deeper".

By the way, such mirror design could produce parallel shots if the angle of the mirrors were adjusted for that.
But then, published "as-is" on social media, they would give bad stereoscopic photos anyway - because of window violations, for example.

With the same weak argument, you could conclude that parallel dual cameras are bad devices because published as-is the images are likely badly aligned, with bad window placement, with window violation, possibly with lens distortion, color mismatch....
You could also conclude that all lenses that exist are bad, because they introduce distortions, and that [mixed with another weak argument you gave] the fact that the cameras or computer software correct them is not a reason to use such lenses in the first place.
In a more general way, the fact that some people cannot use a tool (especially without training) is not a sign that the tool is bad per se.

That said, I would not encourage to use such a device myself.

JackDesBwa


Using The Marco Polo App At The Golf Course

 

Here's another good way to communicate inside VR:
 


Photographer and Designer Builds 3D Printed Stereoscopic ‘Wiggle Lens’

turbguy
 


Petition to release and restore 3D movies from Warner by 3d Film Archive

Philip Heggie
 

Petition to release and restore 3D movies from Warner by 3d Film Archive

https://www.change.org/p/warner-brothers-restore-and-release-warner-brothers-3d-classics-with-3-d-film-archive

 

 

 


 


Re: New 3D lens announced

robert mcafee
 

Some time ago there was a Poppy device which had a 3D adapter like these for the iPhone and a viewer. When introduced there was an app for the phone which I believe corrected (to the extent possible) the images. 

I bought one of these very cheap ($8) some years after the product was discontinued and found, unfortunately, that the app was no longer available and the company disbanded (I think). So I have an $8 viewer for side-by-side images on some iPhones (the ones that still physically fit in it).

It would be nice to have a correction transform to try it out.




On Wednesday, April 7, 2021, 3:58 PM, Depthcam via groups.io <depthcam@...> wrote:



Re: New 3D lens announced

Depthcam
 

> I suggest nothing, I do NOT own the Kula deeper


I was responding to Antonio but also, generally, to people who still appear to be unaware that such mirror converters - when mounted on single lenses - cause strong opposite keystoning due to the converged optical path.


> It is funny to see many people using fisheye devices to make 3D


This is a very different situation because cameras with fish-eye lenses are designed for shooting VR180 content - not to revert the results to a cropped restricted FOV image.  Some stereo enthusiasts do this because the only currently available 3D cameras are designed for VR180 viewing.  In other words, the fish-eye distortion of the lenses in those cameras is a requirement of VR180 viewing - not the result of a poorly designed optical system.  Therefore, using such cameras for regular 3D is a compromise and does again require extensive cropping of the original image.


> as  opposed to the Kula where you just have to press Alt - A


But the problem here is that Kula and all its predecessors are designed for a public that does not press "A" !  They then get eyestrain from the views but do not recognize what causes it and think that it is inherent to 3D viewing. The best demonstration of that are the two portraits I just posted.  Antonio and Jack thought they were very smart by showing that they "can" correct the distortions...  But my point is that those two pictures were published "as is" on social media as "great pictures taken with the Kula Deeper".  Therefore people viewing these are experiencing the optical distortion caused by the device but do not know what causes it.  So they equate eye discomfort with 3D viewing.

Remember again that, as much as some here may argue that "any picture can be corrected", average people using these converters do not correct them and assume that the discomfort is just due to the "3D effect".  This is the result of companies putting devices on the market that have inherent optical design flaws and not warning their buyers about them.


> One advantage of the Kula deeper is that you can use a DSLR or hybrid camera and have an excellent image quality


It doesn't matter whether you use a cheap plastic camera or a high-end DSLR, the results are exactly the same as far as the shortcomings of such converters.  Even if you apply strong corrections, these will still require extensive cropping - not only to correct the keystone, but also the blending of the left and right image at the center.  You also have to take into account that these converters can only be mounted to long lenses, which means that the already narrow FOV is split into two and the cropping needed results in an even narrower view.  When used on shorter lenses, it causes vignetting.  In fact there are traces of vignetting at the top of the right image in one of your samples.  This would require further cropping.


> The bad reputation of 3D certainly comes more from movies, seen by millions of people, rather than a few stereo images taken with beamsplitters by amateurs and seen by nobody.


These 3D converters (they are not "beam splitters") have been around since at least 1914 and people using them definitely show those pictures to friends and family and also publish them.  Nowadays they do appear on social media.  So they certainly do impact the reputation of 3D.

Another problem with the Kula is its size.  It results in an effective interaxial of around 100mm, yet users often shoot close range subjects with it, resulting in perspective distortion - as can be plainly observed in the portrait samples I showed.

But the one problem that is hardest to correct is internal reflections.  This is actually a consequence of the mirror design of any such converters whether they are mounted on a single lens or a pair of lenses.  I have seen signs of internal reflections not only with the Kula Deeper but also with some of the Cyclopital 3D base extenders.  On one of your sample pictures, signs of internal reflections are present.

For all the above reasons, I recommend people stay away from such devices.

Francois
