I have been following this topic and thinking about it. I have sometimes tried to make stereo panoramas by stitching several photos with Hugin and then trying to stereo-align them, with very poor results. The best I got was a few acceptable 2D panoramas (really very few:-). I never got a stereo panorama worth keeping.
Thinking about it, I believe there is a fundamental problem: human vision CANNOT process a very large HFOV. One radian (the horizontal angle of view of a 36mm-equivalent lens) is near the limit of what our vision understands (dragonflies are probably much better:-)
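For reference, the HFOV of a rectilinear lens follows from simple trigonometry: HFOV = 2·atan(sensor_width / 2f). A quick sketch (the 36 mm sensor width of full frame is assumed):

```python
import math

def hfov(focal_mm, sensor_width_mm=36.0):
    """Horizontal field of view (radians) of a rectilinear lens."""
    return 2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm))

# A 36 mm lens on a full-frame (36 mm wide) sensor:
theta = hfov(36.0)
print(math.degrees(theta))  # about 53 degrees, i.e. roughly 0.93 rad
```

So "1 radian" is a good round figure for a 36mm lens, and anything much wider starts to exceed what a flat projection (and our eyes) handle comfortably.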
IMO the way to view large-field-of-view images is to emulate what we do in real life: move the head (and the body) to look at each part of the surrounding scene, but only one part at a time. This implies the image should be SPHERICAL, and the viewer device should convert the section of the sphere we are looking at into a flat image that is shown to our eyes. Perhaps the VR features of smartphones could be used; I'm not sure whether this could be made to work.
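That spherical-to-flat conversion is essentially a per-pixel lookup: for each pixel of the flat viewport, cast a ray through a pinhole camera, rotate it by the current head direction, and read off where the ray hits the sphere. A minimal sketch of that mapping (function names and the yaw/pitch convention are my own, not from any particular viewer):

```python
import math

def viewport_to_sphere(u, v, fov, yaw, pitch):
    """Map a viewport pixel (u, v) in [-1, 1]^2 of a flat view with the
    given horizontal FOV to (longitude, latitude) on the sphere, after
    rotating the view by yaw and pitch (radians). This is the lookup a
    spherical viewer performs for every screen pixel, every frame."""
    # Ray through the pixel for a pinhole camera looking down +z.
    f = 1.0 / math.tan(fov / 2.0)
    x, y, z = u, v, f
    # Rotate the ray: pitch around the x axis, then yaw around the y axis.
    y, z = (y * math.cos(pitch) - z * math.sin(pitch),
            y * math.sin(pitch) + z * math.cos(pitch))
    x, z = (x * math.cos(yaw) + z * math.sin(yaw),
            -x * math.sin(yaw) + z * math.cos(yaw))
    n = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x / n, z / n)   # longitude on the sphere
    lat = math.asin(y / n)           # latitude (elevation)
    return lon, lat
```

The centre pixel (0, 0) simply lands at (yaw, pitch), and the (lon, lat) result indexes straight into an equirectangular image. Doing this fast enough for smooth panning is exactly the hard part; in practice GPUs do it trivially with a texture lookup per fragment.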
For a stereo pair, two such spherical images would be needed. I liked the shooting method explained by Melinda in a previous post: http://superliminal.com/misc/PanoramicStereo.png It would require more than one turn of the rig, with the cameras looking upward, to fill the upper parts of the sphere.

Now we could take all the photos from the R and L cameras and stitch each set into a spherical image. Perhaps Hugin could do it, but wait: we have to stitch the images of each side and AT THE SAME TIME ensure the L and R spheres are aligned for stereo. That is, matching points should be at the same elevation angle on the sphere. This looks feasible, more so because it is a batch task. What looks more difficult to me is how to quickly pan the spherical image in the viewer, because it requires continuously transforming from spherical to flat.
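The "same elevation angle" condition can at least be verified after stitching: in an equirectangular image the pixel row maps linearly to elevation, so matched points between the L and R spheres should differ only in longitude. A small sketch of that check (the match data and image height are hypothetical):

```python
def elevation_deg(y_pix, height):
    """Pixel row of an equirectangular image -> elevation in degrees.
    Row 0 is the zenith (+90), the bottom row approaches the nadir (-90)."""
    return 90.0 - 180.0 * (y_pix + 0.5) / height

def vertical_misalignment(matches, height):
    """For matched point pairs ((xL, yL), (xR, yR)) between the L and R
    equirectangular spheres, return each pair's elevation difference in
    degrees. A well-aligned stereo pair should give values near zero;
    all remaining disparity must be horizontal (in longitude)."""
    return [elevation_deg(yL, height) - elevation_deg(yR, height)
            for (_, yL), (_, yR) in matches]

# Hypothetical matches in a 2000-px-tall pano: one aligned, one 10 px off.
errs = vertical_misalignment([((120, 500), (150, 500)),
                              ((800, 900), (820, 910))], 2000)
print(errs)  # the second pair is off by 0.9 degrees of elevation
```

Feeding such residuals back as constraints is essentially what a joint L+R bundle adjustment would have to do, which is why stitching both eyes independently in Hugin tends to break the stereo.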