Google Locked Red Hydrogen One

robert mcafee
 

I have seen some Red Hydrogen One phones for sale on eBay and elsewhere that were advertised as Google Locked.  I understand there is software available to unlock phones, but it may not be available or work for all models.

Can anyone advise whether a Google Locked Red Hydrogen One can be unlocked without the original Google account information?

Flightdeck Phone???

robert mcafee
 

Saw this auction listing on eBay for a 3D phone from Flightdeck (who also brought us the Truly 3D tablet rebadged as the Freevi Flightdeck Commander).
https://www.ebay.com/itm/3D-Smartphone-first-ever/324014956206?hash=item4b70cbdeae:g:4TkAAOSwkxtd-AUD 

Has anyone ever heard of this 3D phone from Flightdeck?  Note their logo on the back of the phone and on the box.

Re: IPD vs Taking spacing. First true 3D camera announced for 2020

bglick97@...
 

>  But my point is that this "real world" effect at that distance has so little deviation that, to most people, it is indistinguishable from a 2D picture of the same scene.  Again, stereo enthusiasts will look for the tiny bit of depth while most people will not.

            Again, at 20 ft nears we have completely different experiences... even 30 ft nears to infinity have overwhelming depth.  Again, variables exist which make these discussions a bit senseless: the acuity of our eyes, the resolution of our taking lenses, the resolution of our taking medium, the resolution of the viewing lenses, and the brightness of the lighting.  It's a complete optical chain; every variable matters, and it only takes one weak link to limit the entire chain.  In addition, I developed a lighting system so bright it drove the pupil diameter down to its minimum, 3-4 mm on average.  The only sharp portion of our eye's MTF curve is when the pupil is less than 4 mm in diameter.  So hopefully we can agree our experiences are different, because our taking and viewing equipment was probably different, and our visual acuity could be much different as well.
           Since you are well versed in math, take the foveal resolution (similar to pixels/mm), run the math for deviation, and determine how much deviation human eye spacing can deliver and still be perceptible.  You would be amazed, versus the numbers you are tossing around.
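
To put rough numbers on that suggestion, here is a quick sketch of the deviation side of the calculation, assuming a 65 mm eye base and a commonly cited stereoacuity threshold of roughly 20 arcseconds (both stock textbook figures, not numbers taken from this thread):

    import math

    EYE_BASE_MM = 65.0            # assumed average interocular distance
    STEREO_ACUITY_ARCSEC = 20.0   # commonly cited disparity threshold

    def disparity_arcsec(near_mm, far_mm=float("inf")):
        # Angular disparity between two distances, in arcseconds.
        far_term = 0.0 if math.isinf(far_mm) else 1.0 / far_mm
        radians = EYE_BASE_MM * (1.0 / near_mm - far_term)
        return math.degrees(radians) * 3600.0

    for feet in (12, 20, 30, 40):
        d = disparity_arcsec(feet * 304.8)   # feet to millimetres
        print(f"near at {feet:>2} ft: {d:7.0f} arcsec, "
              f"about {d / STEREO_ACUITY_ARCSEC:4.0f}x the assumed threshold")

A 40 ft near against infinity still comes out around 1100 arcseconds: far above bare threshold, yet small compared to a close-range scene, which may be part of why the two sides of this exchange report such different experiences.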


>  I personally never found anyone who could determine the interaxial set in a picture just from looking at it unless the base was massive.

                    I know I never stated this.  Maybe someone else made this assertion and you were commenting to them.  I don't know...

>   She argued she sees just the same with one eye as she did with two!  That shows you how much people actually notice depth in everyday life.

                    There are lots of vision books that explain this in detail... our brain has a lot more 3D cues than just deviation.  And yes, that is why sometimes I put a 2D view in my viewer and people can still sense depth... again, this is superb imagery with massive backlighting.  Do they sense the same depth?  Of course not, but it's still noticeable.  As I wrote on the forum many times, the greatest 3D viewing experience I ever had in my life was viewing a 2D video in a half dome.  That was the greatest demonstration I ever experienced proving the added cues our brain uses to sense depth.  Deviation is still one of the strongest cues available, but certainly not the only one.

>  You told me a lot more than you seem to remember, including sending me schematics of your incredibly complex viewer.

              How would you know everything I was working on??  You must be telepathic.  Yes, I remember sending you a few ray traces, but you saw maybe 5% of the 3D projects I worked on over a 4-year period...  I think I would know this better than you ;)

>   It may not have been the best, but at least it showed what could be done, and it was going to be accessible.

            I disagree with your assessment; this is quite a stretch.  It's like stating: hey, a View-Master is step one, which shows we will eventually be able to mimic human vision in small, lightweight viewers.  In some fields, getting the last 10% of a design complete never occurs, despite millions invested and decades of work.  I know of a lot of optical products the military tried to design where this was the case, and remains the case.  I am not knocking Eric in any way; he was beyond a pioneer.  I have such respect for his diligence, motivation, drive, etc., especially with such limited resources.  I wish there were more Erics out there today!

As for the complete taking and viewing system... of course the cameras can be conquered, as it's nothing more than two 2D cameras synced.  IMO, if a killer, cost-effective viewer system were to hit the market, we would see many camera makers jump into the market.  When you consider that Facebook has dumped hundreds of millions (probably billions now) into R&D on the Oculus, and how far its current evolution has come, it demonstrates just how complex close 3D viewing is.  I am grateful for the high-tech companies, gamers, etc., that drove the technology this far.  Just hope it continues...



Re: VK World.

Harry Richards
 

Thank you. I did not realize that you needed to leave it on all day. Even an old codger like me can learn something new.

I thought getting old would take longer
Harry Richards


Re: VK World.

depthcam
 

Phone or tablets can either be "powered off" or put in "sleep" mode.

When you power off your phone (via a long press of the power button), it is the same as turning your computer off.  The next time you turn it on, it must go through a "booting" process.

When used as a phone, you must leave the device on, and put it to sleep or wake it simply with a quick press of the power button.  That's what's called "sleep" mode.  It's necessary when using your device as a phone, since it cannot receive calls when completely powered off.

Francois

Re: IPD vs Taking spacing. First true 3D camera announced for 2020

depthcam
 

> IF the resolution of the taking lenses, capture media and viewing system is sufficient, 20ft nears will produce the same depth effect in the viewer as it does in the real world.


But my point is that this "real world" effect at that distance has so little deviation that, to most people, it is indistinguishable from a 2D picture of the same scene.  Again, stereo enthusiasts will look for the tiny bit of depth while most people will not.


> But I have shot many nears at 40ft, and the depth effect is still overwhelming.


I guess we have different perceptions of what constitutes an overwhelming picture.  It sounds more like your viewing system is what produces the appeal.  I remember you telling me that people were as impressed viewing a 2D picture in your viewer as viewing a 3D one.

> As always, the devil is in the details, so hard to throw out blanket statements like this.

Just reporting David Burder's research.  I personally never found anyone who could determine the interaxial set in a picture just from looking at it unless the base was massive.


> This is what makes non ortho so  difficult for sharing images...again, for personal consumption, anything goes.


Not a problem if you know your math.  It's all about presenting an amount of deviation to the eyes that is comfortable.  The rest is left to one's creativity.  I think where you and I differ is that you are trying to reproduce real-world viewing.  I am not.  Most people wake up every day seeing in 3D, and it's only when things look "different" that they take notice.  My sister lost sight in one eye a few years ago, and one day she told me that her doctor had said she would no longer see in 3D.  She asked me what that was all about.  She argued she sees just the same with one eye as she did with two!  That shows you how much people actually notice depth in everyday life.


> It's NOT about seeing depth, it's about completely unnatural views, such as: why do those trees look 2 inches tall?


But that's exactly what I like: trying to move away from "normal" viewing, doing things with images that contradict reality.  Not all the time, mind you, but some of the time.  My own mother didn't care much for 3D until 1986, when I showed her some night shots of light paintings I had taken at Expo 86.  That's when she went "wow"!


> Again, its not just the base, its the near and far distances.


Of course.  That was my point about the math and the programs I wrote back then to calculate it. Pompey Mainardi was a math teacher and a specialist in 3D calculations.  He was my mentor.


> The reason the LEEP system, or even current VR does not become more mainstream (IMO), is because the IQ is poor.


That's one reason.  But the greater one is people don't like to strap on what feels like a large diving mask on their heads to view pictures !  Heck, they didn't even like having to wear "sunglasses" to watch TV !


>  Not sure how you knew all the research I did??  I do remember sharing a "few" things with you, but certainly not all.


You told me a lot more than you seem to remember, including sending me schematics of your incredibly complex viewer.  The point you made at the time was how very expensive such a viewer would be.  The point I made is that Eric Howlett found a way around that by using cheap uncorrected fisheye lenses both in the camera and the viewer.  It may not have been the best, but at least it showed what could be done, and it was going to be accessible.  Unfortunately, he was a one-man operation with some help from friends but little funding.  Maybe something better could have been achieved had a camera manufacturer taken over production.  But at least we got to see very early on what the potential was.

> "a complete taking and viewing system" is my holy grail...

I agree that the manufacturers who have made VR180 cameras so far have left the viewing choice to the individual.  That's probably to reduce costs and also because they know the buyers are for a large part already owners of a viewing device such as the Oculus Go.  Mind you, Lenovo did offer a Mirage headset to go with their Mirage camera.  However, most people just bought the camera, which cost less than the viewer !

Francois


Re: IPD vs Taking spacing. First true 3D camera announced for 2020

bglick97@...
 

> Give a newb a MF viewer with ortho views, and the avg. person is freaked out by the realism of viewing a captured scene with life-like depth, that is the appeal.

> For sure... IF the picture itself is appealing.

          This has not been my experience at all.  I can show a newb an MF 3D picture of the inside of my garage, and they can't stop looking at it.  This assumes they have the 3D gene; we all know that those who don't see or appreciate depth are not impressed.


> The problem I see and have seen for years has been for people to argue for orthostereoscopy and then take pictures of mostly flat scenes or where most of the subject matter is twenty feet or more away.  Even though we do perceive depth at such distances, the amount of deviation is minimal and the result is an image that doesn't look much different from a 2D image.

              IF the resolution of the taking lenses, capture media and viewing system is sufficient, 20 ft nears will produce the same depth effect in the viewer as they do in the real world.  The deviation will make it to the retina.  When shooting ortho, my ideal near distances were about 12 ft, but I have shot many nears at 40 ft, and the depth effect is still overwhelming.  To transfer deviation, there can be no weak links in the chain to degrade it before it projects onto the retina: taking lenses, taking media, viewing optics, etc.



> David Burder did some tests on how people perceive interaxial many years ago and found that when they were asked which pictures looked "natural", they invariably chose the ones with a wider than normal interaxial.

                  This is the opposite of what my tests revealed... of course, there are so many variables not mentioned here.  For example, if it's a cityscape with nears at 5 miles, of course hyper will seem more appealing.  As always, the devil is in the details, so it's hard to throw out blanket statements like this.

> I myself was surprised to find that some pictures I took that were shot at twice the normal interaxial did look full size.  Jacques Coté showed me some portraits he took for L'Oréal where the models looked larger than life and yet were shot with a 150mm interaxial and portrait lenses.  So the brain can easily be tricked.

           Yes, the brain can be tricked, "within limits", and everyone seems to have a different threshold for where those limits are.  This is what makes non-ortho so difficult for sharing images... again, for personal consumption, anything goes.


> What I find is that mostly purists in the 3D community are "annoyed" by seeing depth intensity where there should be none.  I haven't found anyone annoyed by it outside the community.  By the way, Pompey Mainardi - genius inventor of the Tri-Delta system - was also an avid fan of hyperstereo even though his own invention was designed with a 62.5mm "normal" interaxial !

              It's NOT about seeing depth, it's about completely unnatural views, such as: why do those trees look 2 inches tall?


> It depends how extreme the hyper effect is and what the subject is.  I actually very seldom shoot with a very wide interaxial myself, but I definitely often shoot with between 100 and 200 mm of lens separation.

              Again, it's not just the base, it's the near and far distances... McKay wrote a good book on this issue; he studied it for years.  I followed many of his formulas... they were well thought out and quite math intensive.  In the end, I abandoned this technique, as the result was so hit or miss... 1 in 5 images was good; the other 4 seemed unnatural and undesirable.
                But again, details matter.  When shooting a subject such as a bird on a branch 300 ft away, with the sky as the ONLY background, hyper worked remarkably well.  But these are rare scenes, i.e. short depth of field with no far points or infinity... in such cases, our brains do not have all the cues that would reveal the distortion.

> Lucky for me, I don't need to depend on what other people claim from having at one time seen a single picture.  I have owned a LEEP viewer for over 35 years and have shot LEEP pictures for two weeks when the camera was loaned to me.  So I know exactly what this viewer can do.  And the effect is amazing.  The main two flaws are the low resolution due to the use of 400 ASA film (since the camera was fixed focus) and colour fringing due to low-cost uncorrected plastic lenses.

       We all agree that a wide AFOV in viewing adds a tremendous WOW effect.  The reason the LEEP system, or even current VR, has not become more mainstream (IMO) is that the IQ is poor.  Our basis for IQ is what we see with the unaided eye, and LEEP and VR fall waaaaay short of that basis.  As we all know, the makers of these products are in the process of trying to advance the IQ of these systems... everyone knows the weak link is IQ.  Great for gaming, but just OK for "fine art" viewing.

> which supports my findings in trying to design and produce super WA viewing lenses.

> There was a radical difference between Eric's approach and yours.  Eric started out with fisheye distorted images and then used fisheye viewing lenses to re-establish the geometry.  You start with corrected shooting lenses and it becomes a much bigger challenge then to try and design viewer optics that won't distort your images.

                 Not sure how you knew all the research I did??  I do remember sharing a "few" things with you, but certainly not all.  One avenue I spent over a year researching was to alter the optics designs, relax the design criteria, and let distortion go.  Optical software can perfectly graph a lens's distortion pattern on an X-Y plot, so I anti-distorted the captured images digitally to match the lens distortion.  I shot brick walls to run these tests.  The results demonstrated just how complex optics design and execution is.  Without boring this list to tears, the short story of the findings was as follows.  Within the "eye box", i.e. the viewing zone where the image center, lens center and eye lens center are all concentric, this goal can "in theory" be achieved.  However, the tolerances of the eye box were so small that it would NEVER be practical to keep all three of these variables concentric in the real world.  While the anti-distortion worked perfectly on my optical bench, with tremendous precision in the alignment, it only took 1 mm of physical movement in one of the variables to break the concentric alignment, and the distortion returned.  In addition, I discovered another distortion variable that had never been discussed in 3D viewing, for which I coined the term "distortion rivalry".  This occurs when the two sides have a different form of distortion.  The brain must then contend with a new form of rivalry (distortion variance between the two views) that it never faces in unaided vision.  This is another source of tremendous viewing stress, all deteriorating the viewing experience.
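
As an aside, the digital anti-distortion step described above can be sketched in a few lines.  The one-term radial model and the coefficient below are illustrative stand-ins; a real pass would use the distortion curve exported from the optical design software:

    import numpy as np

    def predistort(image, k1):
        # Pre-warp an image with a simple radial model so a viewing lens
        # with the opposite distortion straightens it back out.  Assumes
        # the optical centre coincides with the image centre, which is
        # exactly the alignment tolerance discussed above.
        h, w = image.shape[:2]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
        x, y = (xx - cx) / cx, (yy - cy) / cy        # normalised coordinates
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2                        # one-term radial distortion
        src_x = np.clip(x * scale * cx + cx, 0, w - 1).round().astype(int)
        src_y = np.clip(y * scale * cy + cy, 0, h - 1).round().astype(int)
        return image[src_y, src_x]                   # nearest-neighbour resample

    # A synthetic test grid, pre-warped with an arbitrary coefficient:
    grid = (np.indices((480, 640)).sum(axis=0) // 32 % 2).astype(float)
    warped = predistort(grid, -0.15)

Shifting cx or cy by a few pixels (the 1 mm misalignment mentioned above, in pixel terms) shows how quickly the cancellation falls apart.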

I had the benefit of access to the best optical design software and optical labs in the USA... Eric was doing his work long before these sophisticated tools were available.  BTW, even with the anti-distortion system, I kept the MTF requirements high, which still forced a minimum 5-element lens design using high-end glass and coatings.  Also, the wide AFOV produces optics of very wide diameter, assuming sufficient ER (eye relief), which is mandatory to cover those who wear specs.  To attain these views, it would be impossible to use one, or a few, plastic elements.

This is why, in previous posts, I mentioned that it will take a massive breakthrough in optical design to overcome these limitations.  To make a high-resolution optic with a super wide AFOV that is lightweight, small, etc., would defy physics as we know it today.  Hence why I am hoping for VR with eye tracking to simplify the optical requirements.  Even more ideal, IMO, is a mid-range viewer that gets rid of optics completely; or the holy grail IMO, 8K 3DTV.  Seems we are soooo close ;)

> My own goals were closer to yours in that I wanted to have wide angle camera lenses that did not have fisheye distortion.  That's what I didn't like about the LEEP camera.  Every picture was fisheye and could only be "decoded" in the LEEP viewer.  Back in the early nineties, I got involved in a project for a wide angle MF camera that had such lenses.  But then I discovered that trying to find appropriate orthostereoscopic viewing lenses for it was a nightmare - as you found out yourself.

           I did accomplish this to a degree: 60 deg AFOV (not the 90 deg holy grail) with breathtaking MTF, no distortion, no color fringing, etc.  But the optics were the size of your fist, weighed almost 2 lbs each, and would cost about $2k each.


> Once they can produce higher resolution displays, such as 4k per eye


> BTW, the Cinera uses two separate large rectangular displays, 2.5k resolution each.  The result is impressive.

          Can you imagine the jump to 5K... but again, IMO optics will always be the weak link in the chain with these close viewing systems, vs. a non-optical viewing system, till eye tracking is introduced... or a means to keep the eye looking straight forward ONLY.


> This will allow low cost, and light weight lenses to be used with excellent IQ in the center portion only.


> There is a LOT of research work going on at this moment, and miniature displays (smaller than a penny) have been shown that have QHD resolution.  Several companies are now working on small high-resolution VR "glasses" (as opposed to "headsets"), so we are going to get there (see attached).

               Agreed.  Even with a reduced AFOV, a pocket viewer you can insert a memory card into would be another holy grail... i.e. the holy grail for 3D portability ;)

As for my holy grail of a taking and viewing system: yes, there are a few good 180 deg taking systems on the market... but as I mentioned, "a complete taking and viewing system" is my holy grail, and there is no viewing system that can utilize these captures at a level sufficient to match the taking system.  The market is much more advanced on the capture side, as it can steal technology from 2D capture.  That is not true for viewing, hence why viewing will always be the weak link in a complete system, till a big maker designs with a taking-and-viewing-system mindset from the start.


Re: A universal display application

timo@guildwood.net
 

I would be thrilled to get these improvements for my Freevi Commander.

Timo


Re: IPD vs Taking spacing. First true 3D camera announced for 2020

gl
 

On 13/01/2020 22:06, Bill Costa wrote:
But if somebody made a *really* good 3D camera and digital viewer
combination that achieved true ortho and only ortho, I'd
certainly buy it!  (Well, assuming I could afford it.)
I guess you'd want a camera that had 2 or more lenses and could synthesize natural-looking IAs to suit the viewing conditions.  If that was all stored in the image metadata, then digital viewer software could be written that automatically chose the correct ortho IA for any viewing situation.
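
A minimal sketch of that viewer-side selection, with the metadata layout invented purely for illustration: assume the camera records, for each interaxial it can synthesize, the nearest-point parallax as a fraction of image width, and the viewer picks the widest base whose on-screen parallax stays within a crude 65 mm comfort budget (a simplification of real comfort rules, not an existing standard):

    from dataclasses import dataclass

    @dataclass
    class SynthesizedIA:
        base_mm: float             # interaxial the camera can synthesize
        near_parallax_frac: float  # nearest-point parallax / image width

    def pick_ia(options, screen_width_mm, eye_base_mm=65.0):
        # Widest base whose near-point parallax, in on-screen millimetres,
        # stays within the eye-base budget; fall back to the least extreme.
        def on_screen(o):
            return o.near_parallax_frac * screen_width_mm
        usable = [o for o in options if on_screen(o) <= eye_base_mm]
        return max(usable, key=lambda o: o.base_mm) if usable else min(options, key=on_screen)

    stored = [SynthesizedIA(32, 0.012), SynthesizedIA(65, 0.025),
              SynthesizedIA(130, 0.050)]
    print(pick_ia(stored, screen_width_mm=150).base_mm)    # phone screen -> 130
    print(pick_ia(stored, screen_width_mm=2000).base_mm)   # projection -> 65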

Quite feasible technically, but unlikely to happen.
--
gl

A universal display application

John Clement
 

sView now can handle 2D and 3D intermixed, but there are still some glitches which Kirill is working out.

If you set the display input to source, it interprets the input according to the embedded metadata, but that is seldom there.

  1. JPG files are by default considered 2D and are displayed appropriately
  2. JPS files are by default considered 3D
  3. Video files have some problems and we are working them out.
  4. It supports subtitles with some adjustability
  5. It uses the TriDef suffixes to interpret the source format (see the sketch after this list)
  6. MPO is automatically supported.
  7. Fuji AVI is automatically supported.
  8. It works on every device I have tried, including Rokit, LG Thrill, Commander, King 7s, and Windows with an LG FPR monitor.
  9. There will be an enhancement to allow intermixing slides and videos in a show.  It has the option of selecting videos, slides, or both in the trial version I have.
  10. The Android version had 2 separate apps for video or slides, but they are now integrated and switch according to format.
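
The suffix mechanism in item 5 is simple enough to sketch.  The tag table below is illustrative only (I have not verified it against the full TriDef list), but it shows the idea of letting the end of the file name select the decode layout:

    import os

    SUFFIX_TO_FORMAT = {   # example tags only, not the verified TriDef set
        "sbs": "side by side",
        "lr":  "side by side, left image first",
        "rl":  "side by side, right image first",
        "ab":  "above/below",
        "2d":  "monoscopic",
    }

    def guess_format(path):
        base, ext = os.path.splitext(os.path.basename(path))
        if ext.lower() == ".jps":   # JPS is stereo by convention (item 2)
            return "side by side, right image first"
        tag = base.rsplit("-", 1)[-1].lower()
        return SUFFIX_TO_FORMAT.get(tag, "monoscopic (default, as for plain JPG)")

    print(guess_format("holiday-sbs.mp4"))   # side by side
    print(guess_format("portrait.jps"))      # stereo by extension
    print(guess_format("scan.jpg"))          # falls back to 2D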

 

We have been working on the Windows version, but once it is OK, an Android version will also be updated.  This comes close to being what is needed for 3D to become popular; however, the options may still be too geeky for some folks.

 

There is no gallery function, but if you have a large number of folders with videos or slides a gallery can be more of a nuisance than a help.  I find a well ordered list of names to be quite adequate, but I also know how to use manual typewriters, card catalogs and phone books.  Have you seen any of these lately???  I find the TriDef gallery on Android to be a real nuisance and wish it had just a simple list.

 

Isaac Asimov was the speaker at an RPI graduation.  During the introduction, the answers he gave on his bio sheet were read.  To the question “Dr. Asimov, to what do you attribute your prolific success as an author?”, he replied, “To my ability to type 32 correct words per minute.”

 

John M. Clement


VK World.

Harry Richards
 

I do not have a cell phone and have been thinking of using my VK World as a phone. The
problem is that it takes a long time to power up. Does anyone know if there is a way to
shorten the power up time?

Thanks,
Harry

I thought getting old would take longer
Harry Richards

Re: My Live stream from Australia

Olivier Cahen
 

Yes, but those who have no 3D TV set cannot view these images.


Re: Capturing birds in flight #3Dcomposition

Bill Costa

 

> They would make great 3D, but the workflow is already way
> beyond my patience level.

Ditto!

> But if you find the right situation, you can do something
> similar with insects and a more manageable exposure.

Very cool image -- thanks for posting that.

...BC

Re: My Live stream from Australia

Philip Heggie
 

It’s a compromise between losing some resolution and keeping to 3DTV standards.  All 3D TVs decode it to full-width 3D, so why not buy a second-hand 3D TV on Facebook Marketplace in your area?  I suggest an LG smart TV; then you can watch YouTube in 3D, and 3D Blu-rays with a game console like a PS3.

 


Re: Capturing birds in flight #3Dcomposition

Bob Karambelas
 

They would make great 3D, but the workflow is already way beyond my patience level.

But if you find the right situation, you can do something similar with insects and a more manageable exposure.

http://phereo.com/image/538cd47ad475fec6610002a0

Re: My Live stream from Australia

Olivier Cahen
 

Philip, why do you always post images in half width?  It is very uncomfortable.

Olivier

On 15 January 2020 at 11:26, Philip Heggie <philip_heggie@...> wrote:


My Live stream from Australia

Philip Heggie
 


Bushfires Relief Tennis Australia 3D VR H-SBS
www.youtube.com

Re: IPD vs Taking spacing. First true 3D camera announced for 2020

depthcam
 

Looks like the attachments didn't make it.  Here they are again...

Francois

Re: IPD vs Taking spacing. First true 3D camera announced for 2020

depthcam
 

> Give a newb a MF viewer with ortho views, and the avg. person is freaked out by the realism of viewing a captured scene with life-like depth, that is the appeal.


For sure... IF the picture itself is appealing.  The problem I see and have seen for years has been for people to argue for orthostereoscopy and then take pictures of mostly flat scenes or where most of the subject matter is twenty feet or more away.  Even though we do perceive depth at such distances, the amount of deviation is minimal and the result is an image that doesn't look much different from a 2D image.


>  To the avg. person, depth is depth, it doesn't matter how it was attained, ie. what taking base, etc.


Exactly.


> But for the masses, when you try to trick what normal human vision can do, the viewing experience "can" become problematic.   Our brains only have ONE reference on how depth should appear.


David Burder did some tests on how people perceive interaxial many years ago and found that when they were asked which pictures looked "natural", they invariably chose the ones with a wider than normal interaxial.

I myself was surprised to find that some pictures I took that were shot at twice the normal interaxial did look full size.  Jacques Coté showed me some portraits he took for L'Oréal where the models looked larger than life and yet were shot with a 150mm interaxial and portrait lenses.  So the brain can easily be tricked.


> It's accompanied by the annoying miniaturization effect.


What I find is that mostly purists in the 3D community are "annoyed" by seeing depth intensity where there should be none.  I haven't found anyone annoyed by it outside the community.  By the way, Pompey Mainardi - genius inventor of the Tri-Delta system - was also an avid fan of hyperstereo even though his own invention was designed with a 62.5mm "normal" interaxial !


> Many people I show wide base shots which miniaturize... their first response is to break out laughing...


It depends how extreme the hyper effect is and what the subject is.  I actually very seldom shoot with a very wide interaxial myself, but I definitely often shoot with between 100 and 200 mm of lens separation.


> the few people I know who viewed through the (LEEP) viewer stated the view was problematic


Lucky for me, I don't need to depend on what other people claim from having at one time seen a single picture.  I have owned a LEEP viewer for over 35 years and have shot LEEP pictures for two weeks when the camera was loaned to me.  So I know exactly what this viewer can do.  And the effect is amazing.  The main two flaws are the low resolution due to the use of 400 ASA film (since the camera was fixed focus) and colour fringing due to low-cost uncorrected plastic lenses.


> which supports my findings in trying to design and produce super WA viewing lenses.


There was a radical difference between Eric's approach and yours.  Eric started out with fisheye distorted images and then used fisheye viewing lenses to re-establish the geometry.  You start with corrected shooting lenses and it becomes a much bigger challenge then to try and design viewer optics that won't distort your images.

My own goals were closer to yours in that I wanted to have wide angle camera lenses that did not have fisheye distortion.  That's what I didn't like about the LEEP camera.  Every picture was fisheye and could only be "decoded" in the LEEP viewer.  Back in the early nineties, I got involved in a project for a wide angle MF camera that had such lenses.  But then I discovered that trying to find appropriate orthostereoscopic viewing lenses for it was a nightmare - as you found out yourself.

Then I understood why Eric had taken that route.  He already knew that was the only way to have a wide angle orthostereoscopic system at a reasonable cost.


> Once they can produce higher resolution displays, such as 4k per eye


BTW, the Cinera uses two separate large rectangular displays, 2.5k resolution each.  The result is impressive.


> This will allow low cost, and light weight lenses to be used with excellent IQ in the center portion only.


There is a LOT of research work going on at this moment, and miniature displays (smaller than a penny) have been shown that have QHD resolution.  Several companies are now working on small high-resolution VR "glasses" (as opposed to "headsets"), so we are going to get there (see attached).


> It's been my experience that, without rules, problems begin to surface.


I fully agree.  Note that I wrote "a strict set of rules", not "without rules".  What I mean here is that once one understands the mathematics of parallax (one of the first things I studied in the early eighties; I even wrote my own BASIC programs to calculate it), one can then play around with the variables while at the same time ensuring that the results will be comfortable to view.
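
For concreteness, here is a minimal modern stand-in for that sort of program, using the standard first-order deviation formula and the common 1/30-of-frame-width comfort rule from the stereo literature (not code recovered from the original BASIC):

    def film_deviation_mm(base_mm, focal_mm, near_mm, far_mm):
        # First-order on-film parallax difference between near and far points.
        return focal_mm * base_mm * (1.0 / near_mm - 1.0 / far_mm)

    def max_base_mm(focal_mm, near_mm, far_mm, frame_mm=36.0, rule=30.0):
        # Widest base that keeps the deviation within frame_mm / rule.
        return (frame_mm / rule) / (focal_mm * (1.0 / near_mm - 1.0 / far_mm))

    # 35 mm lens, near subject at 2 m, far background at 100 m:
    print(film_deviation_mm(65, 35, 2000, 100_000))   # ~1.11 mm, inside the 1.2 mm budget
    print(max_base_mm(35, 2000, 100_000))             # ~70 mm maximum base

Change the near distance or the focal length and the comfortable base moves with it, which is exactly the "play around with the variables" part.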


> In my dream world, ONE camera with super WA lenses, offering stills and video.


What you are describing is essentially a VR180 camera.  There are some fairly good ones out there.  But what I am waiting for is an 8K model.  It may be just around the corner.

Francois

Re: IPD vs Taking spacing. First true 3D camera announced for 2020

timo@guildwood.net
 

I saw that image too.  I think it was at the NSA convention in Grand Rapids.
It was spectacular.  A superb experience.  I hope one day we will have the capability to make such images again.

Timo


On Jan 14, 2020, at 7:23 PM, Bill Costa <bill.costa@...> wrote:
> Eric was shooting for the Holy grail, he was way ahead of his
> time.  But super wide viewing lenses have tremendous
> limitations.  The few people I know who viewed through the
> viewer stated the view was problematic ...

I got to see the LEEP camera and viewer in person and was able to
look at one slide in the LEEP viewer. As I recall the image was
of a rainy day in Boston taken from the interior of a car looking
up and out the windshield. You could see the dashboard, the
street scene, buildings and sky. The realism of the image was
evocative and captivating.

Holy grail indeed.

Unfortunately that was the only slide I ever got to see in this
viewer.

...BC