StereoPi v2 #3d-cameras


Michael Levine
 

Just wondered what people think of this project. 
Here is the project page
https://www.crowdsupply.com/stereopi/stereopi-v2
and a photo with the housing


David Sykes
 

When they released the earlier version I asked them about sync accuracy but never received a satisfactory reply.
With this new version I cannot view the videos on my (XP !) PC so I will view on iPad Mini later.


JackDesBwa|3D
 

> Just wondered what people think of this project.

I do have the first version.
I used it to build a "stereomaton" with other people at the HAUM hackerspace. This project is an itinerant self-service stereoscopic photo automaton.
In short, there is a touchscreen on the camera we built, with a code displayed on it. When we touch the button, a countdown appears, at the end of which a photo is taken and assembled in the machine (color balance enhancement, alignment, lens distortion correction and window placement, all pre-computed). When the camera is in range of our WiFi access point, it uploads its photos to a computer. At the festival where we presented it, some photos were marked as public and were displayed automatically on our booth. We showed them with different viewing methods (anaglyph, autostereoscopic screen, Google Cardboard stereoscope...). The private photos were available on a website, where people could download their 3D photos with the code that was displayed on the screen (and on our booth with the same code).


It is not a good tool for high-end photos, at least with the cameras I used (the 5MP Raspberry Pi camera v1).
But it is good for applications where you want to play with the images interactively, like in the project I talked about, and several projects that other people did (stereo cameras on robots, first-person vision on drones, telepresence robots with 3D vision, 2D 360° panoramas, live-streaming stereoscopic videos, acquiring multiple viewpoints of a scene, pairing two of them to create animated Nimslo-like photographs...). It is also good for a "small" stereo base [compared to assembled commercial cameras] (down to circa 1 inch safely, perhaps a bit less).
I did not play much with the video aspect, but in the tests I did the result was a bit blocky (on fast-moving scenes). Perhaps it can be better with some tweaking. Also, the video encoder is limited to about 1920×1080, so you end up with a squeezed 3D file if you want to record side-by-side video.
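
For reference, on the Compute Module the GPU itself can stitch the two sensors into a single side-by-side frame, via the stereoscopic mode of the legacy picamera library. A minimal sketch (untested as written here; the resolution is only an example and both cameras must be detected):

```python
# Minimal sketch: capture one side-by-side stereo still on a StereoPi-like
# Compute Module setup, using picamera's stereoscopic mode.
import time
from picamera import PiCamera

# 'side-by-side' makes the GPU glue the two sensors into a single frame;
# stereo_decimate=False keeps the full width instead of halving it.
with PiCamera(stereo_mode='side-by-side', stereo_decimate=False) as camera:
    camera.resolution = (1280, 480)   # 640x480 per eye, chosen arbitrarily
    time.sleep(2)                     # let exposure / white balance settle
    camera.capture('stereo_sbs.jpg')
```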

I did not study what is new in the v2 version, but it should be quite similar, with the upgrades brought by the Raspberry Pi Compute Module 4 (nothing new with the video encoder in this module, if I understand well) and probably some other minor changes for better hackability.

> When they released the earlier version I asked them about sync accuracy but never received a satisfactory reply.

The sync is as good as we could expect for such an architecture.
I did not play a lot with it, but I recorded (though did not keep the video) a water tank kicked so that there were violent waves and jumping water in it, and it was perfectly viewable. Checking frame by frame, you can see the synchronization error on the free-falling droplets, but it was rather small (smaller than the displacement from one frame to the next). I also remember a running chicken flapping its wings fast to go quicker, where only the tip of the wings was out of sync, and in only one of the shots. As expected, it is good sync for most scenes.
For more accurate sync, you can modify the camera modules to run them on the same oscillator, and some cameras (such as the Raspberry Pi camera v2, or at least its sensor) have sync pins that you can probably use, though I do not know how it works exactly.
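
(If someone wants to redo the frame-by-frame check, something like this rough OpenCV sketch works: it splits a side-by-side recording into left/right image pairs that you can then step through; the file names are only placeholders.)

```python
# Rough sketch: split a side-by-side stereo recording into left/right
# frame pairs so the residual sync error can be inspected frame by frame.
import os
import cv2

os.makedirs('pairs', exist_ok=True)
cap = cv2.VideoCapture('stereo_sbs.mp4')   # placeholder file name

index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    half = frame.shape[1] // 2             # SBS frame: left half + right half
    cv2.imwrite(f'pairs/left_{index:04d}.png', frame[:, :half])
    cv2.imwrite(f'pairs/right_{index:04d}.png', frame[:, half:])
    index += 1

cap.release()
```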

JackDesBwa


David Sykes
 

Thanks for that useful information.

> The sync is as good as we could expect for such an architecture.
> it was perfectly viewable.
> As expected, it is good sync for most scenes.

Hmmm ........... as with covid ...... we really need hard technical numbers.
They should publish photos and videos of high-speed action including the inevitable waterfalls, hosepipes or ornamental fountains.
But really, it is not for hobby stereo photography.

> For more accurate sync, you can modify the camera modules to run them on the same oscillator, and some cameras (such as the Raspberry Pi camera v2, or at least its sensor) have sync pins that you can probably use,
Would be very interesting to know how that is done.

> though I do not know how it works exactly.
No one ever does :-)


JackDesBwa|3D
 

> we really need hard technical numbers.

I did not measure it as it was far faster than what I needed for the application.
Also, I gave examples from memory.

> They should publish photos and videos of high-speed action including the inevitable waterfalls, hosepipes or ornamental fountains.

I was very surprised to find the video I shot last year; I thought I had not kept it.
So I uploaded it here: https://youtu.be/Vk8Fhci0IvA
 
> Would be very interesting to know how that is done.
> No one ever does :-)

For the oscillator trick, there are multiple tutorials online.
For the genlock pin, I have no information. Standard Raspberry Pi v2 cameras do have the pin, but apparently it is not exposed, which makes it unlikely that many people have played with it.

JackDesBwa


David Sykes
 

> So I uploaded it here: https://youtu.be/Vk8Fhci0IvA
Thanks.
Very useful to see that.
You have other interesting tests there.
Don't know what that flowchart represents but it looks complicated.
(Your apple tree reminds me I had better get a move on and dehydrate my remaining apples before they are too soft)

> For the oscillator trick, there are multiple tutorials online.
Cannot find any, do you have any links?

> Standard Raspberry Pi v2 cameras do have the pin, but it is not exposed
Do you mean in software or physically?


JackDesBwa|3D
 

> You have other interesting tests there.
> Don't know what that flowchart represents but it looks complicated.

It shows the chain of operators used with ffmpeg in its filter_complex argument to make the video.
I used a script to create the graph (and the associated command) because it becomes hard to follow what you are doing on the command line when the graph gets bigger.
I have not spent much time on stereoscopic videos yet, so the channel is very limited.
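
To give the idea, the script essentially assembles a filter_complex string and runs ffmpeg with it. A much-simplified sketch (placeholder file names, and a far shorter graph than the ones in the flowchart):

```python
# Simplified sketch: build an ffmpeg filter_complex graph in Python and run
# it. This one only scales two recordings and stacks them side by side.
import subprocess

filters = ';'.join([
    '[0:v]scale=960:1080[left]',         # squeeze each eye horizontally
    '[1:v]scale=960:1080[right]',
    '[left][right]hstack=inputs=2[sbs]', # glue the halves together
])

subprocess.run([
    'ffmpeg',
    '-i', 'left.mp4', '-i', 'right.mp4',  # placeholder input files
    '-filter_complex', filters,
    '-map', '[sbs]',
    'output_sbs.mp4',
], check=True)
```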

> For the oscillator trick, there are multiple tutorials online.
> Cannot find any, do you have any links?

In the V2 camera, the clock is generated by an autonomous oscillator (a quartz crystal with its drive circuitry in a single chip) linked to the camera chip through a resistor, so you can remove that resistor to isolate the oscillator and feed in the signal from the other camera's oscillator with a wire (preferably a twisted pair with GND on the second wire to reduce interference). On the V1, there is no intermediate resistor, so you have to remove the oscillator chip itself.

Notice that on this page they also remove (on one camera) the I2C EEPROM that Raspberry added (to force customers to buy their own products, preventing competitors from proposing cheap cameras as they did with the OV5647 sensor), which is not necessary for the oscillator sync.

> Standard Raspberry Pi v2 cameras do have the pin, but it is not exposed
> Do you mean in software or physically?

Physically. I read that the signal does not even reach the little PCB on which the camera is glued.
I did not check that info though: there are 7 unconnected pins on the connector that are not documented in the schematic of the board, and it might be one of those.

JackDesBwa


David Sykes
 


Thanks for that interesting information.
I do know that site but missed that.

> with a twisted pair of wires
Surprised the sharp-edged oscillator signal was not degraded over that distance ..... or maybe it was as they discontinued that item.
So, is the I2C signal broadcast simultaneously to more than one camera or do you have to send to each camera in turn?


JackDesBwa|3D
 

> Surprised the sharp-edged oscillator signal was not degraded over that distance

I don't remember having measured a signal transmitted like that, but I would not expect it to be completely deformed, especially with a twisted ground wire running alongside it.
Anyway, not all the harmonics of the signal are necessary, as the chips generally have an input stage that reconstructs it, so it is not a problem if the signal is a little degraded (not too much though, and it depends on how it is affected).

> ..... or maybe it was as they discontinued that item.

It seems to be still sold in some of their kits.
 
> So, is the I2C signal broadcast simultaneously to more than one camera or do you have to send to each camera in turn?

I²C is a communication bus, which means that there can be multiple devices on it.
For that, there are some minimal hardware parts (inside the chips) and, above all, a protocol which says which device is allowed to speak on the wires.
This protocol uses addresses to determine it (orchestrated by a master), thus there should not be two devices with the same address on the bus. For example, the EEPROM and the camera module are connected to it.
But their addresses are not modifiable, so there must be two buses to use two cameras.
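
As a quick illustration of the address clash, you can probe the same 7-bit address on two separate buses. A sketch with the smbus2 Python library (the address assumes the v2 camera's IMX219 sensor, which usually answers at 0x10, and the bus numbers depend on your device tree):

```python
# Sketch: check whether a camera sensor answers at the same I2C address on
# two different buses. Address and bus numbers are assumptions (see above).
from smbus2 import SMBus

CAMERA_ADDR = 0x10   # usual 7-bit address of the IMX219 (v2 camera) sensor

for bus_number in (0, 1):
    try:
        with SMBus(bus_number) as bus:
            bus.read_byte(CAMERA_ADDR)   # any ACKed transfer proves presence
        print(f'bus {bus_number}: a device answers at 0x{CAMERA_ADDR:02x}')
    except OSError:
        print(f'bus {bus_number}: nothing at 0x{CAMERA_ADDR:02x}')
```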

The CAMArray boards from Arducam apparently have a CPLD which likely decodes the I²C signal coming from the Raspberry Pi and transmits it to each camera separately (perhaps doing some processing in between), making the Raspberry Pi think there is only one camera. The CPLD also probably gets the images from the CSI lines and mixes them to return only one CSI signal to the Raspberry Pi.

I do not know the product, but it is what I can infer.

JackDesBwa


JackDesBwa|3D
 

> Would be very interesting to know how that is done.

To follow up, I just discovered that the Raspberry Pi HQ cameras have the synchronization pin available, but it also needs soldering.
I only read the first post, which presents the implementation in Raspberry's software and how to use it: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=281913
I do not have those cameras to test it.

JackDesBwa