Re: Stereoscopic panorama of Mars!
I think I see an obviously Atlantean pyramid shaped rock, centred at about 28 seconds in.
Funny that it looks better in the browser than in the dedicated app. :-D
The band is about 25°×360° (the relation between angles and surface area is not straightforward, but if I got it right, it covers about 22% of the whole sphere); it is like a panoramic window.
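If you want to check the 22%: the area of a spherical zone between two latitudes is proportional to the difference of their sines, so a band of ±12.5° (assuming it is centered on the horizon, which is only roughly true here) covers (sin(12.5°) − sin(−12.5°))/2 = sin(12.5°) of the sphere:

```shell
# Fraction of the sphere covered between latitudes -12.5° and +12.5°:
# (sin(phi2) - sin(phi1)) / 2, which here reduces to sin(12.5°).
awk 'BEGIN { phi = 12.5 * atan2(0, -1) / 180; printf "%.3f\n", sin(phi) }'
# → 0.216
```

That is about 22%, consistent with the figure above.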
Anyway, there is not enough raw material for a full sphere, and even if there were, the rover would be too close for comfortable viewing.
If you are very motivated, there are some shots of the same location (taken on SOL3 for example) with which you could extend the band toward the bottom, but managing depth at the seams would be harder (it would probably require a more advanced technique than pretending they do not exist and accepting blurry patches).
Yes. But you can download it (on Linux, I would suggest the youtube-dl or you-get command-line tools) and watch it with sView (View > Panorama > Sphere via the menus if needed) or another player that understands both equirectangular projection and 3D.
The stereoscopic window was drawn with Inkscape, exported in transparent PNG and placed over the panning video with FFmpeg (in the same command that created the panning).
It took shape in small touches over about two weeks, with a lot of failures (and crashes, mainly due to lack of RAM), so I will not remember all the details.
But basically, after Gordon talked about panoramic shots, I found (visually; you had not published your spreadsheet at the time) three panoramas on SOL 3, 4 and 11.
I had played with the panorama of SOL3 in 2D before, so I knew that working with so many images was difficult, and I was aware that the nearby rover would also be a tremendous challenge, so I preferred to limit myself to the photos from SOL11 for a 3D version.
I first imported all the images into Darktable and developed the photo of the calibration target, mainly a crop to remove the black borders and a color balance. I then copied the operations from this photo to all the others and exported them.
Then I added these new photos to Hugin. Having a VR export in mind, I kept the equirectangular projection (which is the default); but I also reprojected the panorama to a cylindrical projection later for the panning version.
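For reference, the equirectangular projection is simply a linear mapping from longitude/latitude to pixel coordinates, which is why VR players expect it. A small sketch for a hypothetical 2:1 full-sphere image (the size is made up, not my actual export):

```shell
# (lon, lat) in degrees -> (x, y) pixels in a W x H equirectangular image;
# lon -180..180 maps linearly to x 0..W, lat +90..-90 maps to y 0..H.
awk -v W=3600 -v H=1800 'BEGIN {
  lon = 45; lat = 10
  printf "%.0f %.0f\n", (lon + 180) / 360 * W, (90 - lat) / 180 * H
}'
# → 2250 800
```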
I disabled all images and enabled only those of the first ring of the left view.
With a few images selected at a time (to avoid a quadratic explosion of the matching time), I asked Hugin to find matching points. When all images had their correspondences, I reviewed the found points, removed a few aberrant ones and launched the computation. With the previous panorama, I had to help Hugin with rough camera angles, but with this one, it worked well by itself. The optimized parameters were pitch, roll, yaw, camera field of view and lens distortion (all together at the end, but sometimes only a subset was checked depending on the errors I saw [for which I added control points], to help the optimizer descend the right gradient and to reduce computation time).
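(The quadratic explosion I mean: blindly matching every image against every other is n(n−1)/2 pairs, which grows fast. The image counts below are just illustrative.)

```shell
# Unordered image pairs an all-against-all matcher would have to consider.
awk 'BEGIN { for (n = 10; n <= 80; n += 35) printf "n=%d pairs=%d\n", n, n * (n - 1) / 2 }'
# → n=10 pairs=45
# → n=45 pairs=990
# → n=80 pairs=3160
```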
I did the same for the other rings, then I added some points to assemble the top and bottom rings of each view. I ran the optimizer so many times :-)
The magic step for stereo is to add matching points constrained only in the horizontal direction. I added a bunch of them between the left and right images, but not systematically on all images.
Control points (blue: horizontal matching; green to red [as the error grows]: exact matching)
The remaining rotation between the left and right bands was locked with a single exact matching point between the two images of one pair, which also, obviously, defines the position of the zero-parallax point.
I also launched a photometric optimization, which was soooo long, to help reduce the visual impact of the exposure differences. I think Hugin uses only one CPU core in this process.
To export, I disabled all right images and launched the stitching, and the other way around for the other eye. Of course, since the alignment was constrained with both eyes at the same time, there is no need to further align the result [this is also how I align my own stereo photographs: as a panorama of one image per eye, in rectilinear projection].
The next step was to cut the top and bottom of each view to get a perfectly straight window edge (I saw after publication that I had missed a small part at the top).
I also drew some panels to add context. I initially wanted to place them in space with Blender, but I realized that Hugin could do it too (and probably faster) for such a simple case.
I got some sounds from the SuperCam microphone and played with them to build an audio track for the video (timed against the panning version).
Finally, I merged the photo, the sound track and the overlay in a video with FFmpeg.
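To give a rough idea of what such an FFmpeg command can look like (this is NOT the exact command I used: built-in test sources stand in for the real panorama, the transparent Inkscape overlay and the SuperCam audio, and all sizes and durations are made up):

```shell
# One pass: pan a crop window across a wide image, composite a transparent
# overlay on top, and mux an audio track.
ffmpeg -y \
  -f lavfi -i "testsrc=size=1280x180:duration=5:rate=25" \
  -f lavfi -i "color=c=black@0.3:size=320x180:duration=5,format=rgba" \
  -f lavfi -i "sine=frequency=440:duration=5" \
  -filter_complex "[0:v]crop=320:180:(in_w-320)*t/5:0[pan];[pan][1:v]overlay=0:0[v]" \
  -map "[v]" -map 2:a -shortest -pix_fmt yuv420p merged.mp4
```

The crop filter re-evaluates its x expression on every frame, which is what produces the panning; the overlay then stays fixed relative to the output window.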
The spreadsheet also shows large differences in brightness between the images, even when they apparently use the same filters. Perhaps this is because the photos were taken at different times of day. However, the video looks very well equalized.
For the photos of the rover Curiosity, I read that the exposures were scaled. This is probably the case here too, because the histogram looks more like that of a processed image than that of a photo straight out of a sensor.
The equalization was done by Hugin, with the extremely long process I mentioned before. If you look closely, the average exposure changes smoothly around the ring. The vignetting is not totally gone either.
But the resolution of the video looks somewhat lower than that of the equivalent raw photos.
Yes. It was exported as a 5760×5760 video (also known as 5.7K).
The full export would probably be 20878×20878, but I am not sure I would have had enough resources to process it.
Is it possible to view the VR with some Android app?
The trick with youtube-dl and sView would probably work.