Re: How To Publish Photos On Stereopix.net

Stereopix Net
 

Is there a way to click on Slideshow when viewing images?

Not yet.
Added in todo list.

JackDesBwa


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

John Clement
 

I do not see any references to “particle wave duality”.  It is all conventional optics, very fancy yes, but optics.  They are using diffraction gratings, which is wave optics.  So how does the particle wave duality come in?

 

I do see the term photonic, but the papers which use it are just using the wave model for light.  I do not see any references to the quantum properties of light.  Please point out where the quantum properties come into the paper.  So they use the term momentum, but that does not imply quanta, as waves carry momentum and energy.  Now of course the quantum nature of light comes in when producing it using LEDs.  Actually the quantum nature of light comes into how incandescent lights work.  Planck’s hypothesis of quantization solved the black-body radiation problem.  But once the light is produced, the rest is fancy optics.  The advantage of what they are doing is that they can produce a wide-angle, multiple-image display at very high efficiency.

 

John M. Clement

 

From: main@Photo-3d.groups.io <main@Photo-3d.groups.io> On Behalf Of Nima
Sent: Tuesday, February 4, 2020 9:54 PM
To: main@Photo-3d.groups.io
Subject: Re: [Photo-3d] The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

 

The particle wave duality has nothing to do with it.  That is gobbledygook that is likely to appear in an ad, written to impress the reader.

Or, more plausibly: maybe you just don't know very much about the subject?

Would love to see your peer-reviewed paper that debunks this: https://www.researchgate.net/publication/236070530_A_multi-directional_backlight_for_a_wide-angle_glasses-free_3D_display


Re: Rokit Phone limitations

John Clement
 

Math and science often come up with things that seem unintuitive, but what I said is mathematically correct.  Of course when you start with a good source you end up with a good product, but increasing the pixel count by a factor of 4 requires the same algorithms, which will produce the same artifacts.  As to perception, the artifacts will be smaller, so they may not be as perceptible.  If going from 2K to 4K produces 100 artifacts, going from 4K to 8K will produce 400 artifacts, but they will be smaller.  If the initial data is more accurate then there will be the possibility of more improvement.  That is how scientific experiments work.  Random variations (called errors) multiply as you do calculations, and the artifacts are the resulting “errors”.  More data, more calculations, more errors.
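The scaling claim above can be sketched with toy numbers.  This is only a minimal model; the per-megapixel artifact rate is entirely hypothetical, chosen to echo the 100-vs-400 figures in the text:

```python
# Toy model of the claim above: if an upscaler introduces artifacts at some
# fixed rate per output pixel, quadrupling the output resolution quadruples
# the artifact count, while each artifact shrinks to half its linear size.

def upscale_artifacts(src_pixels, dst_pixels, artifacts_per_mp=12.0):
    """Hypothetical: artifact count scales with output pixel count;
    artifact size scales with the inverse of the linear resolution gain."""
    count = artifacts_per_mp * dst_pixels / 1e6
    linear_size = (src_pixels / dst_pixels) ** 0.5   # relative to source
    return count, linear_size

c1, s1 = upscale_artifacts(1920 * 1080, 3840 * 2160)   # 2K -> 4K
c2, s2 = upscale_artifacts(3840 * 2160, 7680 * 4320)   # 4K -> 8K

print(round(c2 / c1, 1), s1, s2)   # 4.0 0.5 0.5: 4x the artifacts, half the size
```

Under this model both jumps behave identically in relative terms, which is the "same improvement" argument in a nutshell.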

 

The newer algorithms are probably doing some fancy things to smooth out the artifacts, and are using better calculations.  The same methods, suitably scaled, will give similar improvement with less data.  However, notice that this is with all other things equal.  If you have 10-bit color rather than 8-bit, that is more accurate data.  You can trade the accurate color for higher accuracy in upscaling, but more noise in the color.  The upscaling accuracy may be more noticeable than the color jitter.

 

As to my suggestion of Blu-ray, most upscaling will be done for broadcasts, web-based videos, and Blu-rays.  So if you have a set with poor 2K-to-4K upscaling, it might be possible to get better results more cheaply by getting a better Blu-ray player.  This will at least fix the web and Blu-ray videos.  It would probably be cheaper than buying a new TV.

 

John M. Clement

--------------------------------------------------------------------------------------------------------------------------------------

 

The ratio of information needed to information you have is the same for 2 to 4 as 4 to 8.  Mathematically you should get exactly the same improvement.

That doesn't make any sense?  The ratio between them is irrelevant here, you're ignoring that you have 4x more input data to work with from 4K to 8K vs. 1080p to 4K.  If you were to say the ratio were the same from 1080p to 4K vs. 1080p to 8K I might agree with you...if the algorithms and chipsets were the same, which they aren't.

If you have an existing set that has poor upscaling, it may be possible to get a Blu-Ray box that gives you much improvement at a modest cost.

Are you implying that consumers should run their cable and game systems through their Blu-Ray player?  The vast majority of Blu-Ray players don't even have HDMI input.  No, in my opinion if it's not built-in and done automatically by default by the TV it won't be adopted in any meaningful way.  Just like if there was a magical box that automatically converted monoscopic content to 3D, you wouldn't see consumers clamoring at Best Buy shelves trying to buy them.  If 3D is to succeed, it must be good, easy, and provide no major downsides compared to the pre-existing technology. 


Re: Rokit Phone limitations

turbguy
 

On Tue, Feb 4, 2020 at 11:46 PM, Nima wrote:
That still doesn't make sense.  
I don't much care about "making sense", because, IMO, it's really true!  It's perceptual.

Compare the output of an 8 MP cam to that of an 11 MP cam (one "stop"): a slight difference at "normal" viewing sizes, nothing dramatic.  Go to a 16 MP cam (two "stops"): EASILY noticeable.

BUT, for displays, you eventually reach a point of diminishing returns due to the perceptual resolution of the normal human eye (particularly with SMALL displays held at typical viewing distances).

You would have to hold a phone within 6" of your face, or closer (without some assistance/glasses), to perceive the additional resolution once the pixel pitch falls below a certain small angle (roughly an arcminute) at typical viewing distances.
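The diminishing-returns point can be checked with a bit of geometry.  A sketch only: the 6.5" panel size, 12" viewing distance, and the commonly cited ~1 arcminute acuity threshold are assumptions, not measurements of any particular phone:

```python
import math

def arcmin_per_pixel(diagonal_in, res_w, res_h, viewing_in):
    """Angular size of one pixel, in arcminutes, for a 16:9-style panel
    of the given diagonal viewed at the given distance."""
    aspect = res_w / res_h
    width_in = diagonal_in * aspect / math.hypot(aspect, 1)
    pixel_in = width_in / res_w
    return math.degrees(math.atan2(pixel_in, viewing_in)) * 60

# Assumed 6.5" phone panel viewed at 12 inches:
for name, (w, h) in {"1080p": (1920, 1080),
                     "4K": (3840, 2160),
                     "8K": (7680, 4320)}.items():
    print(name, round(arcmin_per_pixel(6.5, w, h, 12.0), 3))
# ~0.85, ~0.42, ~0.21 arcmin: 1080p is already under the ~1 arcmin acuity limit
```

Under these assumptions even the 1080p pixel is below the acuity threshold at arm's length, which is why 4K and 8K phone panels buy so little for 2D viewing.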

That said, an 8K display on a phone, while probably possible in the near future, would have significant "diminishing returns" for 2D display. 

BUT, for 3D display (light field/lenticular/whatever)?    That might be a great thing!

Wayne


Re: Rokit Phone limitations

Nima
 

The term "virtual reality" was coined by Morton Heilig back in the 1940s.  He also defined it correctly, since VR was and is simulated reality, based on the human experience.   As Mort put it, 360 degree systems by themselves are not and cannot be virtual reality unless they are also stereoscopic.  They require the observer to continually spin around, an activity that is very inconvenient for most people.  About 230 degrees can be attained stereoscopically with a two-lens camera, and solves many problems.  

Can you provide a source that he coined it and where those quotes come from, JR?  I don't believe he ever used the term virtual reality or called Sensorama virtual reality.  By most accounts, the term was coined and popularized by Jaron Lanier.

 


Re: Rokit Phone limitations

John Rupkalvis
 

The term "virtual reality" was coined by Morton Heilig back in the 1940s.  He also defined it correctly, since VR was and is simulated reality, based on the human experience.   As Mort put it, 360 degree systems by themselves are not and cannot be virtual reality unless they are also stereoscopic.  They require the observer to continually spin around, an activity that is very inconvenient for most people.  About 230 degrees can be attained stereoscopically with a two-lens camera, and solves many problems.  

John A. Rupkalvis
stereoscope3d@...



On Tue, Feb 4, 2020 at 1:45 PM Nima <contactnimaz@...> wrote:
There is no such thing as 2DVR.  Virtual reality is the simulation of reality.  People have two eyes, but they do not have eyes in the back of their head.  It is not necessary to have a 360 degree FOV to be called VR, but it is necessary that VR must be stereoscopic.  360 degree 2D images are just surround video or surround photography, not VR.    
It very much depends on where your nomenclature and taxonomy of defining VR comes from.  If you search Google for the definition of VR, Google's definition states that VR requires a computer-simulated space, and thus stereoscopic 180 and 360 photos and videos are ALSO not VR.  I generally tend to agree.  But not being true VR =/= not valuable in its own right.


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

Nima
 

That's what auto-alignment software (a form of computational photography) is for.  EASY to implement.
That only works for aligning 3D photos that 3D enthusiasts are viewing.  It doesn't work on videos, and it doesn't work for software that needs to do good realtime computational photography (where you can't sit at a desktop computer and wait a few seconds for the alignment to process).  Auto-alignment also ends up throwing away pixel rows at the edges of the image.  Even cameras like the FujiFilm W3 have to be aligned at the factory for the best quality, and Hydrogen did as well.  In addition, Hydrogen has a simple ML algorithm that learns and improves the calibration based on the first ~20 photos taken with the device.
Also, I thought a cell phone WAS reasonably solid, just like a W3 (which STILL needs software alignment, anyway).

You'd be surprised.  What you and I consider rigid in the real world is different from what a tiny sensor needs to give you the highest quality image.

As for the other reasons, I really hoped that RED would produce something that would actually understand the underlying basics of stereophotography, and produce a sane product, rather than follow the past products.

...but instead, they did not break out of the design mantra of others...pity...and relied on computation to substitute for the real thing...

This is true, but let's be clear: they were never trying to sell a smartphone for 3D enthusiasts.  That would have come later with the RED Lithium 3D cinema camera if it ever shipped.  They wanted to make a consumer smartphone that people who didn't care about 3D would buy, and would become a best selling smartphone.  They achieved the first part, as most Holopix users have never owned or used a 3D device before Hydrogen.  Of course, they fell flat on their face when it comes to the second goal.

I'm always surprised by this: the 3D community seems to know exactly what they want, and companies do tend to fail at building things that appeal to us.  Why haven't there been any community projects to build the products we want?  StereoPi and David Cano's new 4K 3D displays seem to be completely ignored by the 3D community, despite trying to deliver products with the specs that we're asking for.  It's almost like 3D fans don't want to put their money where their mouth is...


Re: Rokit Phone limitations

Nima
 
Edited

He's right, perceptually.  You need to jump up 2 stops in resolution (4 times the data) to perceive a significant difference.  I can't give sources, but that's photographic mantra...

That still doesn't make sense.  There are diminishing returns.  4x as much data =/= twice as good an image.  

Here are some links discussing it.  For example, the iPhone 3GS to the iPhone 4, which was a 4x resolution jump, was MASSIVE and made the phone display look much better!  More than twice as good in my and many other people's opinion!  But would a display with 4x the pixel density of the iPhone 4 look twice as good?  No, it would only look a bit better, maybe.  If you can even notice a difference.

This is even something you can test yourself.  Get a 1440p and a 4K monitor and put them next to each other.  Run the 1440p monitor at 720p and switch back and forth between 720p and 1440p at a normal viewing distance.  Then run the 4K monitor at 1080p and switch back and forth between 1080p and 4K.  Which seems like the larger jump?  I think the vast majority of people would say 720p to 1440p is a much larger visual difference than 1080p and 4K at normal viewing distances.  Both 4x more than the other, but human visual perception is not linear!


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

turbguy
 

"It's actually VERY hard to calibrate and align camera sensors, even more so if they're not mounted to a single rigid object.  The further away they are from each other the more difficult this is."

That's what auto-aligment software (a form of computational photography) is for.  EASY to implement.  Also, I thought a cell phone WAS reasonably solid, just like a W3 (which STILL needs software alignment, anyway).

As for the other reasons, I really hoped that RED would produce something that would actually understand the underlying basics of stereophotography, and produce a sane product, rather than follow the past products.

...but instead, they did not break out of the design mantra of others...pity...and relied on computation to substitute for the real thing...

Wayne


Re: Rokit Phone limitations

turbguy
 

"The ratio of information needed to information you have is the same for 2 to 4 as 4 to 8.  Mathematically you should get exactly the same improvement"

He's right, perceptually.  You need to jump up 2 stops in resolution (4 times the data) to perceive a significant difference.  I can't give sources, but that's photographic mantra...

Wayne


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

Nima
 

Interesting, I suspected there were many interior views generated.  Thanx!
Actually, in Lightfield Animations, it's almost all exterior views.
Does the Hydrogen phone display the artifacts as well in animation?

On the Hydrogen, there's no animation, it's just a lightfield/3D photo.  Lightfield Animations are a way to share 3D images with others who don't have a 3D device to view them with.

It appears the system synthisizes more deviation than existed in the original source.  IMO. it would have been more appropriate for the source to start with a larger interaxial to begin with...

I totally agree.  Hopefully that will be the smallest camera baseline in the history of 3D!

Actually, it makes more stereographic "sense" to me to provide a camera array with 3,4, (or even more) lenses, with some having wider separation (even beyond that of the HTC EVO).

Correct.  But you'd be surprised by what might be ideal.  Apple's tri-camera solution is surprisingly great for good 3D capture.  The only thing you may want to add besides different sensor types is also DIFFERENT baselines between the different cameras.  E.g. perpendicular cameras in an L shape where all the sensors are three different distances away from each other, unlike iPhone which is equidistant.

Cell phone bodies are easily large enough to do this.  WHY are phone manufactures (including Hydrogen) building phones with such small interaxials??

The answer may surprise you, but it's trifold: 

  1. It's actually VERY hard to calibrate and align camera sensors, even more so if they're not mounted to a single rigid object.  The further away they are from each other the more difficult this is.
  2. Latency/data bandwidth to the SOC's DSP: though it's not an insurmountable problem, there is a non-zero amount of time to travel over wires from a remote sensor to a chipset's DSP.  This increases complexity.
  3. Smartphone design: smartphone design is relatively rigid and many Android phones look relatively the same on the inside.  Most designs are based on pre-existing designs, even for motherboards.  Batteries need to all be in one place, as thickness = capacity.  Adding electronics on two sides of a device is actually very hard.  Kudos to HTC and LG for making it happen almost a decade ago!  
Finally, the 3D market is tiny, of course.  Even for devices that support 3D, almost everyone wants to optimize for 2D use-cases instead of increasing costs significantly to solve the above 3 problems.  I would love for a company to make a 3D camera in the 2020's optimized for this one specific job.
I beleive the close interaxial of the Hydrogen phone was a major disappointment to almost everyone in this list.
I still adore the photos and videos I've shot on Hydrogen, but I can understand why it was disappointing.  It does take fantastic 3D photos and videos, especially if your subject is nearby and you view the content on the lightfield display.  But you should definitely try a more serious rig if you want to take amazing pictures, Hydrogen is a consumer product, despite RED pitching it as a professional tool.


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

turbguy
 

"The link above is what we call a "Lightfield Animation", and is actually 20 views generated from 2 or 4 images.  The further you go from the initial views, the higher of a chance there is for artifacts".

Interesting, I suspected there were many interior views generated.  Thanx!

Does the Hydrogen phone display the artifacts as well in animation?  It appears the system synthisizes more deviation than existed in the original source.  IMO. it would have been more appropriate for the source to start with a larger interaxial to begin with...

Actually, it makes more stereographic "sense" to me to provide a camera array with 3,4, (or even more) lenses, with some having wider separation (even beyond that of the HTC EVO).  Cell phone bodies are easily large enough to do this.  WHY are phone manufactures (including Hydrogen) building phones with such small interaxials??

I beleive the close interaxial of the Hydrogen phone was a major disappointment to almost everyone in this list.

Wayne


Re: 3D history, what makes 3D success?

Nima
 

3D gaming is still available, but from what I could see it was never mainstream.

Most of my contributions here can come from the gaming side.  3D gaming has a long and storied history, including the Famicom 3-D, SegaScope 3-D, Virtual Boy, and some arcade systems.  My friend Eric Kurland also recently told me about the Vectrex 3D system!

There were a variety of VR systems as well in the 90's, but for the sake of this discussion we should probably limit the scope and keep it to strictly 3D solutions.

When the most recent 3D boom occurred, both PlayStation and XBOX jumped on the bandwagon early.  Konami shipped a 3D viewer with Metal Gear Ac!d 1 & 2 on the PSP.  A few firms tried to partner with game companies to make green/purple 3D glasses that would work with any TV, and some major games like a Batman game and Assassin's Creed were compatible with. 

Of course, none were nearly as successful as the Nintendo 3DS.  That said, even that product with amazing 3D that millions of people used was primarily a feature that most people I know turned off, which is unfortunate.  The "New Nintendo 3DS" line came out later with head tracking, which greatly resolved many of the complaints, but by then the damage was done and most people were over the "3D fad".

AMD released their open source "HD3D" system which would work with any 3D TV or monitor.  You could use 3D injection software like Tridef to make the majority of games compatible.  It was great!  It was primarily the system I used to play PC games in 3D.

Nvidia has their proprietary 3D Vision system, which required a compatible Nvidia graphics card, compatible Nvidia Monitor, compatible Nvidia 3D Vision 1 or 2 glasses, and a compatible game.  Despite being wholly proprietary and boldly anti-consumer, this was by far the most popular PC-based 3D system available of all time.  It's still used in the medical industry and military.  The drivers have been "deprecated" last summer, but my understanding is they still work for the time being.  Phereo also inherited the 3D Vision Photo library and web software to display photos, so you can thank Nvidia for that.

I'm still a big fan of 3D and I'm playing Halo: Combat Evolved Anniversary in 3D right now.  If anyone ever wants to play 3D games online or try out some 3D hardware in the Bay Area sometime, let me know!


Re: Rokit Phone limitations

Nima
 

The ratio of information needed to information you have is the same for 2 to 4 as 4 to 8.  Mathematically you should get exactly the same improvement.

That doesn't make any sense?  The ratio between them is irrelevant here, you're ignoring that you have 4x more input data to work with from 4K to 8K vs. 1080p to 4K.  If you were to say the ratio were the same from 1080p to 4K vs. 1080p to 8K I might agree with you...if the algorithms and chipsets were the same, which they aren't.

If you have an existing set that has poor upscaling, it may be possible to get a Blu-Ray box that gives you much improvement at a modest cost.

Are you implying that consumers should run their cable and game systems through their Blu-Ray player?  The vast majority of Blu-Ray players don't even have HDMI input.  No, in my opinion if it's not built-in and done automatically by default by the TV it won't be adopted in any meaningful way.  Just like if there was a magical box that automatically converted monoscopic content to 3D, you wouldn't see consumers clamoring at Best Buy shelves trying to buy them.  If 3D is to succeed, it must be good, easy, and provide no major downsides compared to the pre-existing technology. 


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

Nima
 

The particle wave duality has nothing to do with it.  That is gobbledygook that is likely to appear in an ad. Written to impress the reader.

Or, more plausibly: maybe you just don't know very much about the subject?

Would love to see your peer-reviewed paper that debunks this: https://www.researchgate.net/publication/236070530_A_multi-directional_backlight_for_a_wide-angle_glasses-free_3D_display



Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

Nima
 

The limitation of synthesizing the interior images is that the depth will be flattened when one eye sees an interior image.  However using AI it may be possible to generate exterior images with “plausible” missing detail faked.  When I manually convert 2D to 3D I have to generate missing detail.  Mainly I  extend existing lines and features plausibly.  However if there is a monster behind the object, I cannot know it is there.  I suppose you could have a program that would generate monsters behind objects as part of a 3D extrapolation.

 

The cheezy 3D conversions just stretch boundaries and do not try to guess the hidden details.

That's true, good view synthesis involves inpainting, which we do.  It can not know there is another object that's hidden, but it can draw more grass if there's grass next to the image, draw more wall if there's wall that's supposed to be there, etc.


Re: Pretend Stereoid software cost?

John Rupkalvis
 

FWIW Linux is a free open-source operating system that you can double boot on a Windows computer, so that you can still keep Windows but boot your computer to Linux whenever you want to use it.  There are several versions of Linux.  I use Linux Mint on a computer that I double boot with Windows 10.  You can get some starting info here:  https://www.google.com/search?q=linux+for+pc+free+download&rlz=1C1AVNG_enUS657US657&oq=Linux+for+free&aqs=chrome.5.0l8.15250j0j7&sourceid=chrome&ie=UTF-8

John A. Rupkalvis
stereoscope3d@...

Picture


On Tue, Feb 4, 2020 at 3:32 PM Jim Johnston <jimjohnston333@...> wrote:
I just called Pretend LLC this morning and talked to Alan Edwards about Stereoid software.  I told him what interested me about the program was its ability to adjust depth in post.  I don't know of any other programs that can do that.  I told him I'm a member of this group and that we are mainly hobbyists and want to know how to get the software and how much it costs.  He said that originally the software cost about $1000 but he might be able to sell it to a hobbyist for about half that.  Since 3d has become less popular and less major film companies are interested in 3d they no longer support it.  But he said he'd check with his licensing guy to see if he can still sell it and get back to me.  Once he does that I'll let this group know what he said.  The other bad news is the program only works on Mac or Linus operating systems.  I don't know if I could use it on my windows computer.  They're coming out with a new program that has 3d features and is now in the beta testing phase.  They can have only one or two hundred beta testers so if interested in that go here:http://www.pretendllc.com/

Jim


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

Nima
 

What's with the artifacts at the left and right sides of the petals??

The link above is what we call a "Lightfield Animation", and is actually 20 views generated from 2 or 4 images.  The further you go from the initial views, the higher of a chance there is for artifacts.  We only do this for the 2D animation because it's difficult to see the disparity without an extreme shift, so what you're seeing is something you'd never see in 3D.  This is also compounded by the fact that this image was taken with an unknown 3D camera rig, so the software doesn't have as much information about it as it does with known cameras like the Hydrogen Camera.  I was weighing whether or not to share it with you all because I knew you'd notice that...that's not the point!  If you click the link on a Hydrogen, it will open the post in Holopix and you'd be able to see the marvelous 3D image in all it's glory.  Sometimes I view it in a dark room and marvel at the fact that it looks photorealistic...truly the flower has so much presence your brain tells you that you can reach into the screen and pluck it.

 

 


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

John Clement
 

The particle wave duality has nothing to do with it.  That is gobbledygook that is likely to appear in an ad. Written to impress the reader.  They are doing clever geometric optics which only depends on the wave model of light.  The particle nature of light only comes in when the light is received.  The eye’s sensors need enough photons for you to be able to see the light.  The particle nature or rather quantum nature is important for light sensors because they can only respond to frequencies of light above the cutoff.  The quantum nature of light also is important in the physics of light production. 

 

So they are simulating the effect of the finite area of your pupil.  It receives information from more than one angle because it is not a pinhole camera.  This can be done with a lenticular display, but maybe not as precisely.  This is pure classical geometrical optics which doesn’t need a wave model for explanation.  However if they are using diffraction, then the wave model certainly is being used, but not the quantum nature of light.

 

John M. Clement

-----------------------------------------------------------------------------------------------

The image quality loss may be due to the necessity of devoting extra pixels to one image.  So 4k/4=1k.  I seem to recall it is more like 3k, so you are getting horizontal resolution close to SD television or 720 horizontal.

You're totally right here.  On a 1440p 4-view display, you can only devote 720p worth of pixels to any image at a given time.  But lightfield displays are complicated, the views fold into one another by design, because they're aware of the particle/wave duality of light.  This means that if you're looking at the center view(views 2 and 3) each of your eyes is actually receiving photons from 3 views at a time: your left eye primarily receives photons from view 2, but also receives some from view 1 and 3, while your right eye receives photons from view 3 while receiving some from view 2 and 4.  In many displays, you'd expect this to be seen as crosstalk or ghosting, but lightfield displays actually have a LACK of crosstalk and ghosting compared to other displays!  This is due to your brain doing the heavy lifting(with a bit of help from software tricks like anti-crosstalk) of fusing the three images per eye into one, thus meaning, strictly, that you're actually receiving much more than 720p per eye, which is why many people guess the 3D images are at least 1080p if not more.  This is the core difference between automultiscopic displays(like Looking Glass) and lightfield displays.

There is no free lunch here.

True!  But depending on how you look at it, you may be getting your lunch at a discount ;)


Re: May, 1: A new era in 3D photography?: Sony multi-terminal era

timo@guildwood.net
 

Yes. I saw these some time ago. A good option, but a compromise, to be sure.

Timo

Sent from BlueMail

On Feb 4, 2020, at 6:37 PM, doldi.doldi@... wrote:
Timo: "I spent a month making the plug for the second camera, and destroyed 6 plugs ..."
I have found 90° multiport plugs, "only" 10 mm long, I suppose you already know them?:
https://drones.altigator.com/declencheur-pour-sony-avec-prise-multi-coude-vers-le-bas-p-42300.html

Laurent