Re: why 3d is no longer popular - article


"Any on-screen specification is meaningless without also specifying the viewing distance to the screen.
In absolute terms yes, but in this case it can be omitted, because the depths behind and in front of the screen grow proportionally with the distance of the observer to the screen (so the smaller one stays the smaller one at all viewing distances)."   They do not grow proportionally.
Illustrated with numbers: a 65mm IPD, and two sample points with respectively 50mm of positive disparity and 100mm of negative disparity [slightly below the maxima advised earlier in the book].
If you do not like this example, take other numbers: you will see the same relations as long as you stay within valid conditions.
Viewing at 5m, the two perceived distances are respectively 16.66m behind the screen and 3.03m before the screen.
Viewing at 10m, the two perceived distances are respectively 33.33m behind the screen and 6.06m before the screen.
Viewing at 15m, the two perceived distances are respectively 50.00m behind the screen and 9.09m before the screen.
When you double the distance, the perceived depth relative to the screen doubles, and when you triple it, it triples (and so on), which is what I would call proportional.
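The numbers above follow from the usual similar-triangles geometry between the two eyes, the screen plane, and the perceived point. A minimal sketch in Python (the function name and the signed-disparity convention are my own choices, not from the thread):

```python
# Perceived depth from on-screen disparity, via similar triangles.
# Convention (assumed here): disparity is the signed horizontal separation
# of homologous points on the screen; positive = uncrossed (behind the
# screen), negative = crossed (in front of the screen).

IPD_MM = 65.0  # interocular distance from the example

def perceived_depth_mm(viewing_distance_mm, disparity_mm, ipd_mm=IPD_MM):
    """Signed depth relative to the screen plane (positive = behind)."""
    if disparity_mm >= ipd_mm:
        raise ValueError("disparity >= IPD puts the point at or beyond infinity")
    # depth / (viewing_distance + depth) = disparity / ipd, solved for depth;
    # the same signed formula covers the crossed (negative) case.
    return viewing_distance_mm * disparity_mm / (ipd_mm - disparity_mm)

for v_m in (5, 10, 15):
    behind = perceived_depth_mm(v_m * 1000, 50.0) / 1000
    front = -perceived_depth_mm(v_m * 1000, -100.0) / 1000
    print(f"{v_m:2d} m: {behind:5.2f} m behind, {front:4.2f} m in front")
```

Running it reproduces the three rows above (16.66/3.03, 33.33/6.06, 50.00/9.09), and doubling the viewing distance exactly doubles both depths.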

If you are more visual, here is a top-view image, with the 5m and 10m viewing distances stacked on the same graphic (the axes do not share the same scale, for better readability of the lines, but both are labeled in mm).
You can clearly see that the pair of eyes half-way closer to the screen (the eyes are the two pairs of points separated by 65mm at the bottom; the screen is the X axis, with the homologous points on it) sees the rays cross half-way behind and half-way in front compared to the other pair of eyes seeing the same homologous points.

In all cases (because of the proportionality), the percentage of the scene displayed behind the screen is the same (here ~84.6% behind, which is more than half).
You can make the negative space artificially bigger in proportion (more than half of the volume) by limiting the maximum positive disparity even more, but I would not say that the available depth range is bigger in front of the screen only because of this kind of deliberate limitation. Also, using a bigger negative disparity would not lower this proportion much (still over 50% behind).
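As a quick numeric check that the behind/front split depends only on the disparity limits and the IPD, not on the viewing distance (the helper name and framing are mine):

```python
# Fraction of the usable depth range that lies behind the screen.
# Per unit of viewing distance: behind = p / (IPD - p), front = n / (IPD + n),
# so the viewing distance cancels out of the ratio entirely.
def fraction_behind(ipd_mm, pos_disp_mm, neg_disp_mm):
    behind = pos_disp_mm / (ipd_mm - pos_disp_mm)
    front = abs(neg_disp_mm) / (ipd_mm + abs(neg_disp_mm))
    return behind / (behind + front)

print(f"{fraction_behind(65, 50, -100):.1%}")  # the ~84.6% from the example
print(f"{fraction_behind(65, 50, -200):.1%}")  # bigger crossed disparity: still well over 50%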

"Depending upon your viewing distance, the image will appear more squashed or stretched along the Z axis."

Of course. The Z range changes proportionally to the distance of the eyes from the screen, while X and Y are fixed, so the result is a squash/stretch effect. It is probably a bit more complex, as the same XY size at a farther distance might have other consequences, but I guess the change in Z dominates anyway.
But this does not change the fact that if you want to use all the reasonably available space, there is more room behind the screen than in front of it, which was the point under discussion. How you set up your camera so that the proportions are good enough in this area is another topic.

 "If you limit the size displayed behind the screen so that the maximum disparity is 50mm on the screen, you still have more space behind than in front of the screen."  No way.  If the maximum disparity equals infinity, that will be very squashed, with infinity appearing to be only a meter or so (depending upon your viewing distance) behind the screen.  With most theaters, even the front row (the closest viewing distance) usually exceeds 5 meters.  At all other locations in the auditorium, your viewing distance to the screen will be much more than that.

Well, if you compute that it would look like it is only one meter or so behind the screen, I understand why you talked about a bigger size in front of it. But unless I am completely wrong in theory and in my observations, I think you are mistaken on this point. The distance behind the screen for such a disparity is perceived (with the numbers of the example above) as 3⅓ times the viewing distance (and farther for children). Thus for "all other locations in the auditorium", people will see infinity "much more than" 16m behind the screen (so "much more than" 21m away from them), which quite rapidly becomes far enough to make a convincing infinity, especially if the audience is caught up in the story (and at the very least it is a reasonable precaution to avoid pain for people with a small IPD, as J.M.H. keeps repeating).
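To put numbers on this seat by seat (only the 65mm IPD and the 50mm cap come from the example above; the seat distances are illustrative):

```python
# Apparent position of a point shown at the 50 mm positive-disparity cap.
IPD_MM = 65.0
CAP_MM = 50.0
factor = CAP_MM / (IPD_MM - CAP_MM)  # depth behind screen per unit viewing distance (= 10/3)
for seat_m in (5, 10, 20):           # hypothetical seat distances
    behind_m = factor * seat_m
    print(f"seat {seat_m:2d} m -> {behind_m:5.1f} m behind screen, "
          f"{seat_m + behind_m:5.1f} m from the viewer")
```

Even the 5m front row perceives that point about 16.7m behind the screen (21.7m from the viewer), and every seat farther back perceives it proportionally farther.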

"wiggle stereo" is clearly an example of showing motion parallax (False!)
"Sure. The two images are taken from different viewpoints so there is parallax visible in the animation."  Not true. The true parallax that was exhibited simultaneously in the original stereo pair was lost when the stereoscopic information was eliminated to convert it to an animation.

We might again have different definitions of the term. For me, parallax is a difference in the apparent position of an object viewed from different points of view (due to the change of viewpoint itself).
Although stereopsis uses parallax, parallax is not tied to stereopsis. For example, a parallax in the vertical direction can be captured, but it does not create a proper stereoscopic image.
Since this kind of presentation uses multiple viewpoints, it has parallax encoded in the images. The fact that the views are not shown simultaneously does not change the presence of differences due to the point of view.

That said, I agree that it is not a nice way to display images that were captured for stereoscopic viewing, but anaglyph is also a poor way to view color stereoscopic images, and some people like to use it anyway.

One eye can be enough to see in 3D under the right conditions.

"Structure from motion" is a technique that uses one single moving camera to construct a 3D model of a scene (few 3D points to align VFX with real motion of the camera, or even build complete models that can be rendered with a 3D software, including stereoscopically). This is just another point in favor of this statement.

"but still is not in 3D. You need to have the use of both eyes for stereopsis."

The sense of three dimensions is not limited to stereopsis, and I am not talking about monoscopic cues here (which I understand as the cues that let depth be inferred from a single photo/point of view: shadows, occlusion, and so on).

For example, if you have two point lights in the dark, you have no monoscopic cues to distinguish their positions in depth. If you use one eye, as soon as you move your head (provided the distances are in the right range; this also works when things are too far for stereopsis, given a bigger movement), and if your mental model is rich enough, motion parallax can tell you where they are located in depth, because you know (in a simplified manner) that farther objects are less affected by your movement. And it is not a fuzzy in-front/behind relation but a true sense of depth, as the fundamental model of the world involved is the same as for stereopsis and retrieves as much information (except that stereopsis is a simplified version for faster processing at short range).
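The "farther objects are less affected" part can be put in numbers: for a sideways head movement, the angular shift of a point falls off with its depth. A small sketch (the depths, the 10cm head movement, and the function name are my own illustrative choices):

```python
import math

def angular_shift_deg(depth_m, head_move_m):
    """Change in direction to a point initially straight ahead,
    after a sideways head translation of head_move_m."""
    return math.degrees(math.atan2(head_move_m, depth_m))

near, far = 2.0, 10.0   # two point lights in the dark
move = 0.10             # a 10 cm sideways head movement
print(f"near light shifts {angular_shift_deg(near, move):.2f} deg")
print(f"far  light shifts {angular_shift_deg(far, move):.2f} deg")
```

The nearer light shifts roughly five times more than the farther one, which is exactly the depth ordering (and, with a known head movement, the depth magnitude) that motion parallax delivers.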

Wiggles might use a similar mental process to perceive depth, although I must admit that very few wiggles really let me sense the depth (but one or two two-view ones did, and some multi-view ones too). I imagine that I simply have not seen the work of someone who masters this form (I mean someone who intends to use this form of presentation, not a conversion from stereo).

