
Re: Pretend Stereoid software cost?

timo@guildwood.net
 

On the bright side, Linux is surprisingly effective and easy to set up and use. It's also happy to run on an older, outdated computer that would otherwise be scrap.

Timo


On Feb 4, 2020, at 6:32 PM, Jim Johnston <jimjohnston333@...> wrote:
I just called Pretend LLC this morning and talked to Alan Edwards about Stereoid software.  I told him what interested me about the program was its ability to adjust depth in post.  I don't know of any other programs that can do that.  I told him I'm a member of this group and that we are mainly hobbyists who want to know how to get the software and how much it costs.  He said that originally the software cost about $1000, but he might be able to sell it to a hobbyist for about half that.  Since 3D has become less popular and fewer major film companies are interested in it, they no longer support it.  But he said he'd check with his licensing guy to see if he can still sell it and get back to me.  Once he does, I'll let this group know what he said.  The other bad news is the program only works on Mac or Linux operating systems.  I don't know if I could use it on my Windows computer.  They're coming out with a new program that has 3D features and is now in the beta testing phase.  They can have only one or two hundred beta testers, so if you're interested in that, go here: http://www.pretendllc.com/

Jim


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

John Clement
 

The limitation of synthesizing the interior images is that the depth will be flattened when one eye sees an interior image.  However, using AI, it may be possible to generate exterior images with "plausible" missing detail faked.  When I manually convert 2D to 3D, I have to generate missing detail, mainly by extending existing lines and features plausibly.  However, if there is a monster behind the object, I cannot know it is there.  I suppose you could have a program that would generate monsters behind objects as part of a 3D extrapolation.

 

The cheesy 3D conversions just stretch boundaries and do not try to guess the hidden details.
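
For anyone curious what that synthesis step actually does, here is a minimal Python sketch of naive depth-based view shifting (my own illustration, assuming a grayscale image and a depth map normalized so 0 is far and 1 is near; real conversion pipelines handle occlusion ordering far more carefully):

import numpy as np

def synthesize_view(img, depth, max_shift):
    # img: float array (H, W); depth: (H, W), 0 = far, 1 = near
    h, w = img.shape
    out = np.full((h, w), np.nan)   # NaN marks disoccluded "holes"
    xs = np.arange(w)
    for y in range(h):
        # nearer pixels shift further, exposing gaps behind them
        new_x = np.round(xs + max_shift * depth[y]).astype(int)
        ok = (new_x >= 0) & (new_x < w)
        out[y, new_x[ok]] = img[y, xs[ok]]
    return out   # the NaN holes are exactly the detail that must be faked

The NaN holes are where the monster would hide: no algorithm can know what was there, only guess.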

 

John M. Clement

 

From: main@Photo-3d.groups.io <main@Photo-3d.groups.io> On Behalf Of turbguy
Sent: Tuesday, February 4, 2020 9:18 PM
To: main@Photo-3d.groups.io
Subject: Re: [Photo-3d] The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

 

On Tue, Feb 4, 2020 at 05:08 PM, Bob Aldridge wrote:

As I understand it the Hydrogen 1 creates a depthmap from the two images created with that twin lensed camera. It then synthesises the four images that are used by the display. Notionally, the four images are "extreme left", "left", "right" and "extreme right".

So, the two "interior" images are SYNTHESIZED from the "exterior" images. 

Not really unique, but could be quite accurate, as sufficient image information is there to perform the synthesis.  

Just don't try to go beyond the "exterior" images, else artifacts will arise (as easily seen in some Facebook postings made from ONE view).

Wayne


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

turbguy
 

On Tue, Feb 4, 2020 at 07:57 PM, Nima wrote:
Here's a link!
What's with the artifacts at the left and right sides of the petals??

(crosseye view here).

Wayne


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

turbguy
 

On Tue, Feb 4, 2020 at 05:08 PM, Bob Aldridge wrote:
As I understand it the Hydrogen 1 creates a depthmap from the two images created with that twin lensed camera. It then synthesises the four images that are used by the display. Notionally, the four images are "extreme left", "left", "right" and "extreme right".
So, the two "interior" images are SYNTHESIZED from the "exterior" images. 

Not really unique, but could be quite accurate, as sufficient image information is there to perform the synthesis.  

Just don't try to go beyond the "exterior" images, else artifacts will arise (as easily seen in some Facebook postings made from ONE view).

Wayne


Re: Rokit Phone limitations

John Clement
 

The ratio of information needed to information you have is the same for 2 to 4 as for 4 to 8.  Mathematically you should get exactly the same improvement.  Of course, improvements in upscaling algorithms will give better results.  AI can put in details that logically might be there, but then maybe not.  Another reason for better upscaling may be improvement in the original signal.  I am pretty sure that the original upscaling used simple sharpening algorithms, which produce visible artifacts.  With faster processing, deconvolutions can be used.  If you know the characteristics of the smearing in the original, a deconvolution can be very exact.  Also, greater bit depth, say 10 bits rather than 8, allows for better results with lower noise.  Deconvolution buys sharpness at the cost of increased noise.  You can't recover information that is gone, but you can fake it.  It is also possible that the newer upscaling is doing some processor-intensive work based on what the human eye cannot perceive.  There has been much research in that, and it is ongoing.
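
As a concrete illustration of that sharpness-versus-noise tradeoff, here is a minimal frequency-domain Wiener deconvolution in Python/NumPy (a sketch, assuming you already know the blur kernel, i.e. the "characteristics of the smearing"; the noise_to_signal constant is the tradeoff knob):

import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=0.01):
    # Smaller noise_to_signal -> sharper result but more amplified noise;
    # larger -> gentler correction. The whole tradeoff sits in one constant.
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * G))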

 

If the newer sets use the better algorithms for upscaling from conventional HD, I am sure you will see much better improvement than in the older sets.  But even there you have to tweak the algorithms to suit the source, and make different tradeoffs for artifact visibility.  Unfortunately, many video sources are now stuck with older upscaling.  This happened in SD TV: with time, better methods of transferring film to video were developed, but the stations still continued to use the older equipment and tapes rather than upgrade.  Only some of the most marketable movies were upgraded, and many other worthy productions are still only available in smudgy copies.

 

If you have an existing set that has poor upscaling, it may be possible to get a Blu-ray box that gives you much improvement at a modest cost.  Actually, most people are just not that picky, so the incremental improvements may not excite them.  Then there are women who have 4 color primaries and can see colors men cannot, but the current technology only caters to the standard 3 primaries.  I don't know if those women are perturbed by the shallow color reproduction.

 

John M. Clement

 

 

Now why should 4k to 8k upscale look better than 2k to 4k?

There are a variety of reasons for that, namely: 4K to 8K has 4x as much information to work with as 1080p to 4K.  In addition, 4K upscaling was a popular feature at the advent of 4K TVs, so the chips and algorithms were limited to what was available at the time, which, in my opinion, looked mostly indistinguishable from 1080p, whereas native 4K content was clearly different from 1080p at most normal viewing distances.  I haven't seen any notable improvements in 4K upscaling in the last ~5 years, but I'd love to learn more if you've seen anything in that regard.



Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

Nima
 

The image quality loss may be due to the necessity of devoting extra pixels to one image.  So 4k/4 = 1k.  I seem to recall it is more like 3k, so you are getting horizontal resolution close to SD television, or 720 horizontal.

You're totally right here.  On a 1440p 4-view display, you can only devote 720p worth of pixels to any one image at a given time.  But lightfield displays are complicated: the views fold into one another by design, because they're aware of the particle/wave duality of light.  This means that if you're looking at the center view (views 2 and 3), each of your eyes is actually receiving photons from 3 views at a time: your left eye primarily receives photons from view 2, but also receives some from views 1 and 3, while your right eye receives photons from view 3 while receiving some from views 2 and 4.  In many displays, you'd expect this to be seen as crosstalk or ghosting, but lightfield displays actually have a LACK of crosstalk and ghosting compared to other displays!  This is due to your brain doing the heavy lifting (with a bit of help from software tricks like anti-crosstalk) of fusing the three images per eye into one.  Strictly speaking, that means you're actually receiving much more than 720p per eye, which is why many people guess the 3D images are at least 1080p if not more.  This is the core difference between automultiscopic displays (like Looking Glass) and lightfield displays.
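
The raw pixel budget is easy to check (taking the Hydrogen's 2560x1440 panel as the assumed example):

# a 2560x1440 panel split across 4 views
print(2560 * 1440)     # 3,686,400 total pixels
print(4 * 1280 * 720)  # 3,686,400 -> four 720p views exactly tile the panel

The interesting part is that, per the view-folding above, perceived resolution ends up higher than that naive division suggests.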

There is no free lunch here.

True!  But depending on how you look at it, you may be getting your lunch at a discount ;)


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

Nima
 

As I understand it the Hydrogen 1 creates a depthmap from the two images created with that twin lensed camera. It then synthesises the four images that are used by the display.

This is true for all SBS content on the device, including content shot with the Hydrogen camera.

Notionally, the four images are "extreme left", "left", "right" and "extreme right".

This isn't totally correct.  Whether the original SBS pair becomes the inside two or the outside two views depends on the image and the subject in focus, and the algorithm decides based on that.  You could just as easily end up with "Left", "Slightly Left", "Slightly Right", and "Right".

Unfortunately, to my eyes, all this image processing results in some loss of image quality. 

There may be a potential loss in quality depending on the quality of the input image (you definitely can't view an SBS 16K image on a mobile phone), but try not to conflate the first-generation display with the always-improving interpolation algorithm.  You may not like what you've seen due to just one or just the other.  Ideally, when you feed in an SBS, your original images are untouched and you just end up with twice as many (from two to four).

Technically, the system could compensate for the tiny lens separation on the H1 (looks like less than half an inch) and thus the "extreme" left and right views could exhibit similar parallax to stereo pairs with lens separation similar to a person's eye separation. But, looking at images showcased on the H1's "official" channels, this doesn't seem to happen. Even on the Hydrogen's much hyped display the images still seem to exhibit reduced depth. Well, to my eyes, anyway.

Yes, the camera is doing a crop and alignment which virtually increases the baseline (actually, the sensors are more than 4.5K, but you end up with 4K images due to this), but this is a minor effect.  The bigger effect comes from our algorithms being aware of the Hydrogen baseline and compensating for that.  That's why the images look deeper and parts stick out of the screen more on the Hydrogen than they do when exported as raw SBS.  That said, if you're not excited about what Hydrogen images look like, you should check out some of the hyperstereo images and photos taken with the W3 and other stereo rigs on the Hydrogen display: they look amazing!  There's a blue flower that won one of our photo contests in 2019 which reminds me of a hologram... it looks so insanely real, like you can reach out and pluck it...

 

Here's a link!


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

Nima
 

"How is that performed, with only two cams available on the Hydrogen phone??  And in what cases is it "not really"?

Well, as mentioned in the prior post, "H4V" isn't really a real thing.  If you put in an SBS, whether shot on Hydrogen or not, it will generate two synthesized views.  But it doesn't have to do that.  All real-time content, e.g. games and apps, uses four virtual cameras in the game or app and actually samples four images per frame, 60 times per second.  In addition, you can just use four images from a Nimslo or Nishika, or draw them yourself in Photoshop if you'd like to create content yourself.  Or make a rig of four cameras!  Some have even made rigs of 4 Hydrogens, though I'd recommend using a much larger sensor if you're going through all that work anyway.
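
As a toy sketch of what "four virtual cameras" means for a real-time renderer (the 65 mm total baseline here is just an assumption for illustration, not Leia's actual spacing):

def rig_offsets(n_views=4, baseline=0.065):
    # Evenly space n virtual cameras across the baseline (meters),
    # centered on the main camera; each one renders its own view per frame.
    step = baseline / (n_views - 1)
    return [-baseline / 2 + i * step for i in range(n_views)]

print(rig_offsets())  # [-0.0325, -0.0108..., 0.0108..., 0.0325]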

The other point I made, however, is that not all image interpolation is created equal... if you think technology in 2020 is the same as it was in 2012, then you've got another thing coming!


Re: Rokit Phone limitations

Nima
 

Now why should 4k to 8k upscale look better than 2k to 4k?

There are a variety of reasons for that, namely: 4K to 8K has 4x as much information to work with as 1080p to 4K.  In addition, 4K upscaling was a popular feature at the advent of 4K TVs, so the chips and algorithms were limited to what was available at the time, which, in my opinion, looked mostly indistinguishable from 1080p, whereas native 4K content was clearly different from 1080p at most normal viewing distances.  I haven't seen any notable improvements in 4K upscaling in the last ~5 years, but I'd love to learn more if you've seen anything in that regard.
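
The "4x as much information" figure is just pixel counting:

# pixels available to the upscaler, per input frame
p1080 = 1920 * 1080   #  2,073,600
p4k   = 3840 * 2160   #  8,294,400
p8k   = 7680 * 4320   # 33,177,600
print(p4k / p1080)    # 4.0 -> a 4K->8K upscaler starts with 4x the data of 1080p->4K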

On the other hand, 8K upscaling is a requirement of the 8K spec, unlike the 4K spec.  8K also requires HDR, and firms like Samsung are artificially limiting their highest-end upscaling chips and software to their 8K TVs, choosing not to implement them in their 4K TVs.  The 8K upscaling looks better than 4K upscaling due to matters of capitalism, not technical limitations.  But to a consumer, it still looks noticeably better, regardless of the reason.

Some observers have said that TVs do a better job of upscaling from 480i/30fps to 1080p60 by bobbing the interlaced 480 material and displaying it at 60 Hz.

Another reason why 8k upscales better may be that you can't visually resolve the pixels as well as in 4k, so the defects are less noticeable.  Then again, it might be that applying the same algorithm used for 4k-to-8k to upscale 2k would give just as good a quality jump.  The key to upscaling is to have a source that upscales well.  I have tried various parameters when creating a DVD to see how well it upscales to HD, and there are definitely things that work better.  Refocusing is an important parameter.  Making it look really good at SD may make it look worse at HD.  It certainly also depends on the upscaling algorithm.

All true!  But I think every firm is going to have its own secret sauce, as that's what makes the difference.  I've been blessed with being able to try so many high-quality 8K panels at CES the last few years, and IMHO, Samsung steals the show every year for all real consumer use cases (not digital signage or enterprise displays).


Re: Rokit Phone limitations

John Clement
 

Now why should 4k to 8k upscale look better than 2k to 4k?  The same algorithms should work.  Yes, better algorithms driven by AI will do a better job.  The problem may lie in upscaling from interlaced to non-interlaced.  Some observers have said that TVs do a better job of upscaling from 480i/30fps to 1080p60 by bobbing the interlaced 480 material and displaying it at 60 Hz.

Another reason why 8k upscales better may be that you can't visually resolve the pixels as well as in 4k, so the defects are less noticeable.  Then again, it might be that applying the same algorithm used for 4k-to-8k to upscale 2k would give just as good a quality jump.  The key to upscaling is to have a source that upscales well.  I have tried various parameters when creating a DVD to see how well it upscales to HD, and there are definitely things that work better.  Refocusing is an important parameter.  Making it look really good at SD may make it look worse at HD.  It certainly also depends on the upscaling algorithm.
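
A quick way to see how much the resampling kernel alone matters is to upscale the same source with different filters; a sketch with Pillow (the filenames are hypothetical):

from PIL import Image

src = Image.open("frame_sd.png")   # e.g. a 720x480 DVD frame
for name, kernel in [("nearest", Image.NEAREST),
                     ("bilinear", Image.BILINEAR),
                     ("lanczos", Image.LANCZOS)]:
    src.resize((1920, 1080), resample=kernel).save(f"upscaled_{name}.png")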

John M. Clement

 

You're quite right in general, but I think you're being a bit pessimistic.  8K is a small bump in quality, diminishing returns from 4K.  It's better, and noticeably better at that, especially at large sizes, BUT it is not as great a jump as 1080p to 4K was.  Instead, 8K has a killer feature that 4K did not: AI/ML-powered upscaling that ACTUALLY works.  Realtime ML for video was not cheap and easy when 4K began to come out, so converted content looked bad.  8K-upscaled content looks amazing!  Even old black-and-white movies look amazing in 8K because of this!

Believe it or not, this is going to be the determining factor in whether 3D and lightfield displays take off in the living room on television sets: can discrete graphics chipsets and machine learning be used to convert pre-existing 2D content into 3D content on the fly?  If it failed when they tried to do 3D conversions in the early 2010s, why would it be different this time?  Because a decade of advancing machine learning algorithms is a LONG time, and now, it finally works ;)


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

John Clement
 

The image quality loss may be due to the necessity of devoting extra pixels to one image.  So 4k/4 = 1k.  I seem to recall it is more like 3k, so you are getting horizontal resolution close to SD television, or 720 horizontal.  There is no free lunch here.

 

John M. Clement

 

From: main@Photo-3d.groups.io <main@Photo-3d.groups.io> On Behalf Of Bob Aldridge
Sent: Tuesday, February 4, 2020 5:08 PM
To: main@Photo-3d.groups.io
Subject: Re: [Photo-3d] The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

 

As I understand it the Hydrogen 1 creates a depthmap from the two images created with that twin lensed camera. It then synthesises the four images that are used by the display. Notionally, the four images are "extreme left", "left", "right" and "extreme right".

Unfortunately, to my eyes, all this image processing results in some loss of image quality.

Technically, the system could compensate for the tiny lens separation on the H1 (looks like less than half an inch) and thus the "extreme" left and right views could exhibit similar parallax to stereo pairs with lens separation similar to a person's eye separation. But, looking at images showcased on the H1's "official" channels, this doesn't seem to happen. Even on the Hydrogen's much hyped display the images still seem to exhibit reduced depth. Well, to my eyes, anyway.

Bob Aldridge

On 04/02/2020 22:48, turbguy wrote:

"This isn't really true.  In many cases, it really is 4 unique images, especially in all real-time content."

How is that performed, with only two cams available on the Hydrogen phone??  And in what cases is it "not really"?

Wayne


Re: May, 1: A new era in 3D photography?: Sony multi-terminal era

Laurent DOLDI (Toulouse, France)
 

Timo: "I spent a month making the plug for the second camera, and destroyed 6 plugs ..."
I have found 90° multiport plugs, "only" 10 mm long; I suppose you already know them?
https://drones.altigator.com/declencheur-pour-sony-avec-prise-multi-coude-vers-le-bas-p-42300.html

Laurent


Re: Pretend Stereoid software cost?

Jim Johnston
 

I just called Pretend LLC this morning and talked to Alan Edwards about Stereoid software.  I told him what interested me about the program was its ability to adjust depth in post.  I don't know of any other programs that can do that.  I told him I'm a member of this group and that we are mainly hobbyists who want to know how to get the software and how much it costs.  He said that originally the software cost about $1000, but he might be able to sell it to a hobbyist for about half that.  Since 3D has become less popular and fewer major film companies are interested in it, they no longer support it.  But he said he'd check with his licensing guy to see if he can still sell it and get back to me.  Once he does, I'll let this group know what he said.  The other bad news is the program only works on Mac or Linux operating systems.  I don't know if I could use it on my Windows computer.  They're coming out with a new program that has 3D features and is now in the beta testing phase.  They can have only one or two hundred beta testers, so if you're interested in that, go here: http://www.pretendllc.com/

Jim


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

Bob Aldridge
 

As I understand it the Hydrogen 1 creates a depthmap from the two images created with that twin lensed camera. It then synthesises the four images that are used by the display. Notionally, the four images are "extreme left", "left", "right" and "extreme right".

Unfortunately, to my eyes, all this image processing results in some loss of image quality.

Technically, the system could compensate for the tiny lens separation on the H1 (looks like less than half an inch) and thus the "extreme" left and right views could exhibit similar parallax to stereo pairs with lens separation similar to a person's eye separation. But, looking at images showcased on the H1's "official" channels, this doesn't seem to happen. Even on the Hydrogen's much hyped display the images still seem to exhibit reduced depth. Well, to my eyes, anyway.

Bob Aldridge

On 04/02/2020 22:48, turbguy wrote:
"This isn't really true.  In many cases, it really is 4 unique images, especially in all real-time content."

How is that performed, with only two cams available on the Hydrogen phone??  And in what cases is it "not really"?

Wayne


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

turbguy
 

"This isn't really true.  In many cases, it really is 4 unique images, especially in all real-time content."

How is that performed, with only two cams available on the Hydrogen phone??  And in what cases is it "not really"?

Wayne


Re: The future of 3D: VR, AR, Automultiscopic & Lightfield Displays and Images #futureOf3D #viewing

Nima
 

Well Vlad, you clearly know what you're talking about!  You're not wrong that the quality of the phone, software, and support didn't meet most people's expectations...myself included.  That said, it is the first compatible product in a 3D/lightfield App Store ecosystem, so the device has a lot of value as a development kit as well as a creative playground, even if it can't serve as a reliable day-to-day phone for most people.  

I do want to add a few points of contention however:

3. The rear cameras are an average off-the-shelf camera module - everybody was expecting the much higher image quality that RED is famous for.

Totally true, but that said, the camera sensors are not low-quality by any means!  They are high quality Sony sensors.  When it was released, the only two better cameras you could shoot with were the top-end Galaxy and the newest iPhones.  And those two can't shoot 3D!

4. And the worst of all for us 3D enthusiasts: the useless new proprietary, so-called 4-view format for the images.

The format is a hard requirement for real-time software due to the optical properties of the display, but not for photos and videos!  RED rolled back their requirement for it all to be proprietary and released an update that allows you to capture both photos and videos in regular, standard SBS.  In fact, this is how you get the best quality out of the 4K Sony sensors!  You can also easily open half- and full-width SBS content on the device and have it play back instantly without any conversion needed!

I wrote "so called 4-view format", because in reality it's just a regular L&R pair (from a dual lens cam module) with extra two interpolated frames added in auto-postprocessing.

This isn't really true.  In many cases, it really is 4 unique images, especially in all real-time content.  In addition, you can view true "Quad" images on the device as well.  If you're familiar with Nimslo or Nishika cameras, we actually have photographers on Holopix who have converted those 4-cam images into Quads and uploaded them to Holopix.  They look fantastic, by the way!

And I do have to take offense at "two extra interpolated frames added in auto-postprocessing".  Not all software/conversion is created equal!  I've seen a whole lot of it... including newer companies like Magic Display and Mopic, who were showing off some of their content at CES.  And while Mopic is doing incredibly well when it comes to their iPad tablet 3D, their PC 3D was just awful and would hurt anybody's eyes.  And that's WITH head tracking!  Magic Display was just a blurry mess with pseudo-3D everywhere.  In most cases, if I were to extract two random adjacent views from a 4-view image and show them to you on a traditional 3D display, you would not be able to tell if they are true stereo, one interpolated and one stereo, or both interpolated.  I guarantee it!  There are of course visual flaws in some images in some cases, but that's getting better all the time.

Yes, nowadays we can buy a Hydrogen-1 phone for less than $200, but it's practically useless for us. Many of us already have tens of thousands of stereoscopic images in our digital collections in legacy formats, and I bet nobody wants to convert them to H4V just to be able to view them on only one device. Why doesn't a native photo/video viewer app running on this phone support existing stereoscopic formats?

Why, my dear friend, but it does!  SBS is fully supported on Hydrogen!  The only reason MPO isn't natively supported is that it's a tiny fraction of the 3D content out there, and enthusiasts can easily convert from MPO to SBS themselves.  It is interesting, though, that many 3D enthusiasts now love and embrace the MPO standard... I remember when everyone was complaining about this "new and incompatible" standard that a small association created and Japanese companies like Fujifilm and Nintendo were trying to push ;)
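
That conversion really is only a few lines.  A sketch with Pillow, which reads the two views of an MPO as frames (the file names are hypothetical):

from PIL import Image

def mpo_to_sbs(mpo_path, out_path):
    # An MPO is essentially two JPEGs in one container; Pillow exposes
    # the views as frames of a single image object.
    im = Image.open(mpo_path)
    left = im.copy()        # frame 0 = left view
    im.seek(1)              # frame 1 = right view
    right = im.copy()
    sbs = Image.new("RGB", (left.width + right.width, left.height))
    sbs.paste(left, (0, 0))
    sbs.paste(right, (left.width, 0))
    sbs.save(out_path, quality=95)

mpo_to_sbs("DSCF0001.MPO", "DSCF0001_sbs.jpg")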

The H4V format doesn't introduce any new value, while MPO (being a Multi Picture Object) can truly contain multiple images in one encapsulation.

Let me tell you a secret: there's no such thing as the "H4V format".  There are Quads, SBS, and Leia Image Format files, which are all called "H4V" by RED for marketing reasons and to make it seem like a single proprietary system.  And yes, there are benefits to Leia Image Format vs. MPO: namely, Leia Image Format images can be .jpg or .png and shared as a 2D image with non-3D users.  Many programs don't recognize .MPO files, so you can't easily share them with others.

And all that brings me to Holopix... I just went to that web page and it does nothing... I can't view any images there... it's again a closed society accessible only to Hydrogen-1 phone owners. Why doesn't Holopix show hosted images the way Phereo and Stereopix do (in any convenient format, including L&R parallel or cross pairs)? Do we really need another limited proprietary platform that goes nowhere? Why not be more inclusive and target a much wider user base? There's nothing wrong with continuing to support a single source of H4V content, but please also include the rest of the 3D sources in your environment.

If we did all of that all at once, it wouldn't have left us with much to do in 2020 now would it? ;)


Re: Rokit Phone limitations

Nima
 

There is no such thing as 2DVR.  Virtual reality is the simulation of reality.  People have two eyes, but they do not have eyes in the back of their head.  It is not necessary to have a 360-degree FOV to be called VR, but it is necessary that VR be stereoscopic.  360-degree 2D images are just surround video or surround photography, not VR.

It very much depends on where your nomenclature and taxonomy for defining VR comes from.  If you search Google for the definition of VR, Google's definition states that VR requires a computer-simulated space, and thus stereoscopic 180 and 360 photos and videos are ALSO not VR.  I generally tend to agree.  But not being true VR =/= not valuable in its own right.


Re: Rokit Phone limitations

Nima
 

 The 24 multiview might work on a phone, but consider that it reduces 4K down to SD/6 or approximately 100 pixels across.

Which device has 24 views?  Hydrogen is 16 views, but shows 4 views duplicated 4 times at any given time, and uses the other views when you rotate to portrait orientation.  Hydrogen is 4 views of 720p worth of pixels each.  So it's a higher pixel density 3D image than what you're getting on 1080p TVs and monitors.

 While the Multiview does help the sweet spot, there are still angles with transition jumps.

That's true!  It's not perfect... yet.  But it is already an entire generation better than lenticular 3D screens in that regard.  Two small ~20-degree view-inversion zones between stereoscopic view zones leave ~140 degrees of viewable screen out of the 180 possible on the display.


3 people would be annoyed by trying to find the sweet spots.  Any inconvenience just turns them off.  Convenience rules.

Convenience does rule!  Which is why having the repeated views is so much better than a single 3D view.  I will admit, a minority of people don't want to find a view and watch, but many find that their natural angle of view (over the shoulder of their friend) is already in the perfect spot to view the 3D display.  Lightfield displays are a world better for multiple viewers than either lenticular displays or parallax-barrier displays can ever be (and are much more similar to printed lenticular photos in that regard).

To make 3D mainstream it has to fill a void, and people have to be convinced that it is desirable.  When people come and squeal with delight that you have a 3D TV, then it will be mainstream.  You have to have both devices and loads of content that people want.  Looking at the mainstream, new 3D is basically invisible.  It might survive longer on gaming, but I think that may also be fading.  8k has a better chance because it does not require special equipment such as glasses, and it is compatible with existing content.  But will it survive?  It can sell once the existing 2K sets are gone and it is the only alternative.  3D doesn't have that type of survivability.

You're quite right in general, but I think you're being a bit pessimistic.  8K is a small bump in quality, diminishing returns from 4K.  It's better, and noticeably better at that, especially at large sizes, BUT it is not as great a jump as 1080p to 4K was.  Instead, 8K has a killer feature that 4K did not: AI/ML-powered upscaling that ACTUALLY works.  Realtime ML for video was not cheap and easy when 4K began to come out, so converted content looked bad.  8K-upscaled content looks amazing!  Even old black-and-white movies look amazing in 8K because of this!

Believe it or not, this is going to be the determining factor in whether 3D and lightfield displays take off in the living room on television sets: can discrete graphics chipsets and machine learning be used to convert pre-existing 2D content into 3D content on the fly?  If it failed when they tried to do 3D conversions in the early 2010s, why would it be different this time?  Because a decade of advancing machine learning algorithms is a LONG time, and now, it finally works ;)
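
For a taste of what "on the fly" 2D-to-3D looks like with today's open tools, here is a sketch using MiDaS, a public monocular depth-estimation model (entrypoint names per the intel-isl/MiDaS repo; this is an illustration, not whatever pipeline a TV vendor actually ships):

import cv2
import torch

# Load a small public depth-estimation model plus its matching preprocessor
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(img)).squeeze().numpy()
# 'depth' can now drive stereo/multiview synthesis per frame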

 

 


Re: Rokit Phone limitations

Nima
 

I cannot agree.
 
It is as if you proposed a new stereoscope, claimed that it works better with standard stereo cards thanks to a magical optical system that creates multiview from the pair, but required a special card format. Then we would have to blame the printers for not using your format, even if you gave them the template for free when they asked. The analogy with the physical world is limited, but it gives an idea of my objection.

You have a website that doesn't support Netscape Navigator!  Why is that?  Could it maybe be that you only support newer, better browsers that have deprecated support for older standards so that they could move forward and provide a better experience overall?
 
There are things that are easily supported with backwards compatibility and things that fundamentally cannot be.  Leia's devices support half-width and full-width SBS because we have software that can convert it on the fly from stereo to multiview.  The same is not true for interlaced apps and websites.  Believe me, no one is trying to push proprietary technologies forward for no reason.  The fact is just that sometimes you can't move forward with a better product without dropping support for older technology.  It's the same reason Apple has dropped support for PowerPC apps in macOS and 32-bit apps in iOS.
 
Working with an SDK is a bit similar. You have to adapt your application to the way the SDK is shaped. If you want to adapt to multiple SDKs, you have to create special code for each one, and machinery to switch to the right one. It is not easy and sometimes impossible. And this is only if the developers continue to work on the application, and additionally accept to spend time on this compatibility for a device they possibly do not own.

No one should ever force developers to support platforms they don't want.  That said, look at VR companies like Against Gravity: they support Oculus Mobile, Oculus Desktop, OpenVR, PlayStation VR, and even iOS apps.  This is a small 12-person company!  Or look at Virtual Desktop, which supports GearVR, Oculus Go, Oculus Quest, Oculus Rift, HTC Vive, and Windows MR.  It's one man!  The legendary Guy Godin is able to develop an industry-leading app, which even supports features like 3D photo and video viewing, all by himself.  However, of course, not everyone is a genius developer like Guy.  I know my personal projects are significantly less impressive than those.  My VR game only supported Oculus Rift and HTC Vive, which is only two systems, and I totally agree with you that it's difficult!  But ultimately, it is up to the developers to make the choice on whether or not to move forward and add support.  There's no real alternative to this.

Leia also indicated that there exists an SDK, but Nic did not provide a link to its documentation either.

That's strange!  Here you go: http://developer.leialoft.com

I would be happy for the app to support those devices as well if it were easy to integrate, but I will not be proactive since I do not own them [by the way, there is no need for a special occasion to send gifts 😉].

I totally agree!  Would love to have a chat and see if your birthday can come early this year.

Based on what I saw, the best would be for apps to draw as usual (SBS or interleaved format, for example) and for the system to convert it transparently into the format of the display.

If only this were possible!  Though SBS would potentially work with some small graphical issues, interleaved has a variety of problems: de-interlacing, doing multiview generation, and then re-interlacing for our display within a single frame (less than 16 ms) is not possible even on the world's best mobile chipsets.  That also ignores that the app still needs to communicate with the backlight driver to turn 3D mode on and off, which can't be done without our SDK anyway.
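
To see why that round trip is expensive, here is a sketch of just the final re-interleave step (classic column interleaving; the actual Leia pixel layout is different, this only shows that every pixel of every view must be touched within a 60 Hz frame's ~16.6 ms budget):

import numpy as np

def interleave_columns(views):
    # views: list of n images, each (H, W, 3); output column j is taken
    # from view (j % n). De-interleave + synthesize + re-interleave means
    # doing this (and much more) fresh for every frame.
    n = len(views)
    out = np.empty_like(views[0])
    for i, v in enumerate(views):
        out[:, i::n] = v[:, i::n]
    return out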

In other words, the use of the standard stereo-card format should be as transparent as possible, and publicly documented, so that it is still available to new apps when the company stops product support.

SBS is a great format for enthusiasts.  But if enthusiasts expect regular consumers to stare at a double image, press a button somewhere to switch to 3D mode, view their 3D image, and then press another button to switch back to 2D every time they want to view 3D content, then it's no wonder 3D has failed to take off in the mainstream all this time!


3D Rarities Vol 2 can be pre-ordered - to be released 3/24/2020

KenK
 

Just got an announcement for this today ...

3D Rarities Vol 2 can be pre-ordered - to be released 3/24/2020

From the website, Pricing - "was $39.95, now $29.95 (Save 25%)" plus shipping.
(I don't know how there is a "was" price since it hasn't been released yet, but...)

https://www.flickeralley.com/classic-movies-2/#!/3-D-Rarities-Volume-II/p/170617337/category=20414531

I have nothing to do with Flicker Alley and thought this was newsworthy, so I posted here rather than to sell-3d.