https://keystonedepth.cs.washington.edu/ #3Dcomposition


Ian Walker
 

My apologies if this has already been discussed, but I thought it might be of interest to many here:

KeystoneDepth: History in 3D (washington.edu)


Wright Carlton
 

I find the short videos amazing, with the z axis brought in as well (i.e. 3 degrees of freedom).  I could not see more than the examples on the first page.  Two questions. First: has anyone found the right keyword to search for other videos with z-axis movement (i.e. in and out)?  Second question: does anyone know how to create such a video using their image and Disparity Information Map?

Thanks

Carlton


Etienne Monneret (Perso)
 

On 22/03/2021 at 03:59, Wright Carlton via groups.io wrote:
Second question: does anyone know how to create such a video using their image and Disparity Information Map?

The best candidates for this are certainly Depth Map Viewer and Depth Map Player, which use a 2D picture and a depth map to build a 3D model:

http://www.3denlive.com/doc/wiki/index.php?title=3D-stereo_softwares#Depth_Map_Player
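
For what it's worth, the basic trick behind such tools can be sketched in a few lines of Python (only the general idea, not the actual code of Depth Map Viewer/Player; the filenames are placeholders): one vertex per pixel, displaced along Z by its depth value, with neighbouring pixels connected into triangles.

import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("RGB"))
depth = np.asarray(Image.open("depthmap.png").convert("L")) / 255.0
h, w = depth.shape

# Write a textured mesh as a Wavefront OBJ. Per-vertex colours after the
# coordinates are a common OBJ extension (MeshLab reads them, for example).
with open("model.obj", "w") as f:
    for y in range(h):
        for x in range(w):
            r, g, b = img[y, x] / 255.0
            # one vertex per pixel, pushed along Z by its depth value
            f.write(f"v {x} {h - y} {depth[y, x] * 50:.2f} "
                    f"{r:.3f} {g:.3f} {b:.3f}\n")
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x + 1              # OBJ indices are 1-based
            f.write(f"f {i} {i + 1} {i + w}\n")          # two triangles
            f.write(f"f {i + 1} {i + w + 1} {i + w}\n")  # per pixel quad

(Downsample the image first; this writes one vertex per pixel.) Displacing a flat grid like this is also why you see stretched triangles at depth edges when the viewpoint moves too far.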

You may also experiment with 3Denlive's stereo depth map tool, which should enable you to zoom in/out, change the X viewing position, and change the perspective effect, but not the Y viewing position. See the last animation on this page as an example:

http://3denlive.com/demos/en.php

If someone knows about other software that does such a thing, I may add it to my software directory page.

;-)

Etienne


Wright Carlton
 

Thanks Etienne,

I like the viewer.  I have created a 3D model using my own stereo pair, and used StereoPhotoMaker (with multiple frames) to create a reasonable video (or GIF) on my way to a lenticular print.  But the linking of the z axis (in/out) as well as up/down and side to side is what impresses me with the KeystoneDepth approach; it makes it look more natural.  I guess it is similar to VR's 3 degrees of freedom, so I suspect that is where the answer lies.

Carlton


Etienne Monneret (Perso)
 

On 22/03/2021 at 09:22, Wright Carlton via groups.io wrote:
I like the viewer.  I have created a 3D model using my own stereo pair, and used StereoPhotoMaker (with multiple frames) to create a reasonable video (or GIF) on my way to a lenticular print.  But the linking of the z axis (in/out) as well as up/down and side to side is what impresses me with the KeystoneDepth approach; it makes it look more natural.  I guess it is similar to VR's 3 degrees of freedom, so I suspect that is where the answer lies.
In principle, once you have the 3D model, you should be able to build whatever movement you want.

Again: with 3De you should be able to build X+Z movements with perspective effects, but Y movements (with vertical perspective adjustments) are still not possible.
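
To illustrate the principle with a minimal Python sketch (my own toy example, not 3De's code): once the pixels are lifted into 3D points, any camera movement is just a sequence of perspective projections.

import numpy as np

# Stand-in point cloud: in practice you lift each pixel (x, y) with
# depth d to the 3D point (x, y, d).
points = np.random.rand(10000, 3) * [1.0, 1.0, 0.5] + [0.0, 0.0, 1.0]

def project(pts, cam_x=0.0, cam_z=0.0, focal=1.0):
    # Perspective projection from a camera shifted by (cam_x, 0, cam_z).
    # Dividing by depth makes near points move more than far ones,
    # which is exactly the parallax/perspective effect.
    p = pts - np.array([cam_x, 0.0, cam_z])
    return np.stack([focal * p[:, 0] / p[:, 2],
                     focal * p[:, 1] / p[:, 2]], axis=1)

# An X+Z dolly: each frame is the same cloud seen from a new position.
frames = [project(points, cam_x=t * 0.02, cam_z=t * 0.01) for t in range(30)]

Mathematically a Y movement would work exactly the same way; the limitation is only in what the tool exposes.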

Etienne


Wright Carlton
 

Thanks Etienne, I will investigate more.  I looked at the last example but in my quick look did not see how to do it.  As a side note, this is possibly Google's way of countering Facebook 3D, which I have also created images for ... the mobile phone version has the nice feel of swiveling the phone to see 'around' the image.  Early in the Samsung VR world there was a wonderful app called 'Window to the World' which allowed the natural feel of looking through a window: by moving side to side, or up/down, you could see more of the image ... ideally this is what I'd like to see.  KeystoneDepth misses a key 3D requirement of having the image with a clean window, with the image (at least on the edges) not coming out in front of the window.

Carlton


Etienne Monneret (Perso)
 

On 22/03/2021 at 10:01, Wright Carlton via groups.io wrote:
I looked at the last example but in my quick look did not see how to do it.
I'm not sure I understand what you mean by "did not see how to do it".

Perhaps you could install 3De, download the demo pack from the demo page, and load it into the software to see how it works.

You may use a 2D image with a 2D depth map (the demo just before the last one), or a stereo image with a stereo depth map (the last demo).

If you need some explanations, just ask me.

;-)


Etienne Monneret (Perso)
 

On 22/03/2021 at 10:01, Wright Carlton via groups.io wrote:
I looked at the last example but in my quick look did not see how to do it.
You may send me a picture and a depth map, 2D or stereo, and I will try to build you an example.

;-)


Bill Costa as just a member
 

For folks who are still following this thread, I recommend looking at the video explaining this project if you have not already done so.

https://www.youtube.com/watch?v=_AFnmkC_6OQ


...BC




Etienne Monneret (Perso)
 

On 22/03/2021 at 13:33, Bill Costa as just a member wrote:
For folks who are still following this thread, I recommend looking at the video explaining this project if you have not already done so.
https://www.youtube.com/watch?v=_AFnmkC_6OQ
Doing some tests with their provided left/right images and depth maps, I find that the hard part is dealing with the holes. Of course, this is especially true when there is a large depth gap between the foreground and background parts. On a first reading of the provided paper, their "double projection" is really not clear to me.

:(


Etienne Monneret (Perso)
 

On 22/03/2021 at 14:55, Etienne Monneret (Perso) wrote:
Doing some tests with their provided left/right images and depth maps, I find that the hard part is dealing with the holes. Of course, this is especially true when there is a large depth gap between the foreground and background parts. On a first reading of the provided paper, their "double projection" is really not clear to me.

Got it!

If I'm right, it's quite simple (a rough code sketch follows the list):

  1. Using a picture P1 with its depth map, they render a change in the point of view of P1, producing a picture P2. This P2 picture shows holes wherever something was not visible in picture P1. Applying the same transformation, they also compute the corresponding depth map for P2.
  2. They then use P2 with its depth map to build a picture P3 from the original point of view of image P1. If everything were known, picture P3 would be identical to picture P1. But there are new holes in P3 where something is not visible in picture P2 (parts hidden by the P1-to-P2 transformation).
  3. Comparing P1 and P3, they now know exactly what should go into the P3 holes. This enables them to build a large set of picture pairs, where one image contains holes and the other provides the completed picture with the holes filled.
  4. They use this large set of picture pairs to train a neural network that learns to fill the holes in a picture.
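
Something like this rough Python sketch is my own guess at it (a purely horizontal disparity shift stands in for the viewpoint change; forward_warp and the parameter values are mine, not the authors'):

import numpy as np

def forward_warp(img, disp, shift):
    # Splat each pixel horizontally by shift * disparity, keeping the
    # nearest (largest-disparity) pixel when two land on the same spot.
    # NaN disparities mark holes and are skipped.
    h, w = disp.shape
    warped = np.zeros_like(img)
    zbuf = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            d = disp[y, x]
            if not np.isfinite(d):
                continue
            nx = int(round(x + shift * d))
            if 0 <= nx < w and d > zbuf[y, nx]:
                warped[y, nx] = img[y, x]
                zbuf[y, nx] = d
    holes = np.isneginf(zbuf)            # pixels where nothing landed
    return warped, np.where(holes, np.nan, zbuf), holes

def make_training_pair(p1, d1, shift=0.5):
    p2, d2, _ = forward_warp(p1, d1, shift)        # step 1: P1 -> P2
    p3, _, holes3 = forward_warp(p2, d2, -shift)   # step 2: P2 -> P3
    return p3, holes3, p1       # steps 3-4: P1 is ground truth for holes

The clever part is that no hand-labelled data is needed: P1 itself provides the ground truth for the holes in P3.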
Of course, it's not perfect. This page shows some failures:

http://xuanspace.cs.washington.edu/3DV_supplementary/pages/big_artifacts.html

I don't know if the "hole filling" toolkit is available somewhere....

:)

 


Gordon Au
 

On Sun, Mar 21, 2021 at 10:59 PM, Wright Carlton wrote:
Second question: does anyone know how to create such a video using their image and Disparity Information Map?
I believe the project authors made their demo animations using 3D-Photo-Inpainting, an AI program (http://github.com/vt-vl-lab/3d-photo-inpainting), and I'd say it's a great pairing. The THIRD and last video on this page is an example of using it with the KeystoneDepth data: http://worldofdepth.com/daily/210201.html. That one was made as a horizontally panning video from a single image of the stereocard.

The video before that shows the result of combining videos made from each eye view—there are binocular discrepancies because of how the AI inpaints the left and right views differently.

By the way, the disparity maps from this project are in the Magma colormap palette. Simply desaturating will give you an acceptable grayscale depth map, but if you have the tools, there are ways to convert it more precisely using CLUTs (color look-up tables).
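
For example, here is a minimal Python sketch of the CLUT idea (my own illustration, assuming numpy, matplotlib, and Pillow; the filenames are placeholders): match each pixel to the nearest entry of the magma colormap's own 256-entry table, and use that index as the gray value.

import numpy as np
from matplotlib import cm
from PIL import Image

def magma_to_gray(rgb):
    # 256-entry RGB table of the magma colormap, scaled to 0-255
    lut = cm.magma(np.linspace(0, 1, 256))[:, :3].astype(np.float32) * 255
    px = rgb.reshape(-1, 1, 3).astype(np.float32)
    d2 = ((px - lut[None, :, :]) ** 2).sum(axis=2)  # distance to each entry
    # the index of the closest entry becomes the gray/disparity value
    return d2.argmin(axis=1).astype(np.uint8).reshape(rgb.shape[:2])

rgb = np.asarray(Image.open("keystone_disparity.png").convert("RGB"))
Image.fromarray(magma_to_gray(rgb)).save("keystone_depth_gray.png")

(For very large maps you would want to process in chunks, since the distance matrix is width x height x 256 floats.)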

Also FYI, anyone can use the 3D-Photo-Inpainting AI for free, and without an advanced computer, via Google Colab: http://github.com/vt-vl-lab/3d-photo-inpainting. It takes some getting used to, but it winds up pretty easy to use IMO (I think there are YT tutorials). I may do a workshop on using it for a future NYSA or other 3D club meeting/conference.

- Gordon


John Toeppen
 

If one seeks to produce 3D models, one typically uses dozens or even thousands of images.  Some software and viewing systems let you view the models in stereo.  It is curious that stereo and 3D are not joined at the hip, but they are not.  Stereo digital photography, stereo videos, stitched panoramas, stereoscopic panoramas, and solid models have been fun, and links to examples are on my home page: http://holographics3d.com/

I have posted 1100 models on Sketchfab because it is currently the best way to share my models, to view them easily myself, and even to offer them for sale.   https://sketchfab.com/toeppen

You can try the software for free and learn about point clouds, meshes, and textures.  Shooting objects from all sides to triangulate on every point is a challenge, the software has a great deal of work to do, and your video card will earn its keep and heat your room.  Learning how to do this without missing anything is difficult, but the medium is a blast.   https://www.agisoft.com/

If you are getting paid to shoot, or are otherwise willing to pay for powerful pro software, I like this:   https://www.capturingreality.com/   This software is complex and the names for some functions do not make sense, but boy does it go.  The PPI method of payment is about $5 on up for a model, and you only pay if you like the results.

One of my son's friends and I started e-mailing about this topic some time back and formed a FB group that now has >25,000 members.  The game, photography, and movie people converged on the site.  I am a tourist compared to these pros, but I like being a tourist.  I need to be a tourist with a drone next.   3D Scanning Users Group | Facebook

I also started a group, Scanned 3D World | Facebook,
that is more about what we scan and why than how.  I think of this as one way to share a powerful and amusing set of tools.  People in this group have shared quite a bit with me since way back, when people gave me hell for not having allegiance to film cameras.  Image processing and displays have improved quite a bit but still have a way to go for stereo, and we can help.

John Toeppen 





Bill Costa as just a member
 

I want to thank Ian for bringing this wonderful resource to our attention.  I've spent a couple of days reading the materials and downloading the images.  This thread has explored a number of topics, but I want to take things in a different direction, so I will start a new thread about my goals and efforts to make these images more widely available for 'real' 3D viewing.  My plan is to submit a series of short posts over the next few weeks, looking for opinions and feedback on specific topics.  

Stay tuned.



Etienne Monneret (Perso)
 

On 22/03/2021 at 19:27, Gordon Au wrote:
On Sun, Mar 21, 2021 at 10:59 PM, Wright Carlton wrote:
Second question: does anyone know how to create such a video using their image and Disparity Information Map?
I believe the project authors made their demo animations using 3D-Photo-Inpainting, an AI program (http://github.com/vt-vl-lab/3d-photo-inpainting), and I'd say it's a great pairing.

I wonder if 3D-Photo-Inpainting and the KeystoneDepth project of this topic really use the same method... reading the papers available on each web site, the explanations seem very different. At this stage of my understanding of them, I'm not sure whether they have anything in common or not.





Gordon Au
 

On Tue, Mar 23, 2021 at 09:00 AM, Etienne Monneret (Perso) wrote:
I wonder if 3D-Photo-Inpainting and the KeystoneDepth project of this topic really use the same method... reading the papers available on each web site, the explanations seem very different. At this stage of my understanding of them, I'm not sure whether they have anything in common or not.
I believe they're totally independent research projects and teams, but they're aware of each other's work. The lead researcher for KeystoneDepth recommends and has herself used 3PI to make animations from KeystoneDepth project data. (I've communicated with her privately.)