A while ago, I started working on a dataset I captured a few years ago with a Microsoft Kinect One.
I immediately realized the data looked much cleaner than the newer datasets I created with my Intel RealSense D435.
I had already noticed that, beyond a certain distance, the depth data was full of craters. I knew the error is proportional to the square of the distance, but in my case it was much larger than expected. So I calibrated the sensors, and now I stay closer to my targets during acquisitions.
But for the last dataset I captured, I tried another strategy: I decided to also save the raw IR footage so I could process it offline.
Stereo vision
RealSense cameras are RGBD sensors: they simultaneously provide a color (RGB) stream and a depth (D) stream.
There are several techniques to measure depth. For example, the original Kinect for the Xbox 360 used structured light, while the Kinect One used a time-of-flight camera.
The RealSense D400 series is based on stereo vision, which works by matching the same scene point in frames captured by two different cameras. The displacement of that point between the two images (the disparity) is related to the relative position of the two cameras and to the depth of the point.
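To make the relation concrete, here is a minimal sketch of the rectified pinhole stereo model: with focal length f (in pixels) and baseline B (the distance between the two optical centers, in meters), a point with disparity d pixels sits at depth z = f·B/d. The function name and the example numbers below are my own illustrative choices, not values from the RealSense SDK.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Convert a disparity map (pixels) to depth (meters) via z = f * B / d."""
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)  # no match -> depth unknown/infinite
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative numbers (roughly D435-like, not exact specs):
# f ~ 640 px at 640x480 resolution, baseline ~ 50 mm.
z = depth_from_disparity(np.array([64.0, 16.0, 4.0]), focal_px=640, baseline_m=0.05)
print(z)  # [0.5 2.  8. ] meters: depth grows as disparity shrinks
```

This sketch also explains the quadratic error I mentioned above: differentiating z = fB/d gives |δz| = (z²/(fB))·|δd|, so a fixed matching error of, say, a quarter pixel costs a couple of millimeters at half a meter but more than a decimeter at four meters with these parameters.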