Multi-View 3D Reconstruction with Self-Organizing Maps on Event-Based Data

Lea Steffen, Stefan Ulbrich, Arne Roennau, Rüdiger Dillmann
Published in: 19th International Conference on Advanced Robotics (ICAR)
Depth perception is crucial for many applications, including robotics, UAVs, and autonomous driving. The visual sense, like cameras, maps the 3D world onto a 2D representation, losing the dimension representing depth. One way to recover 3D information from 2D images is to record and join data from multiple viewpoints; a stereo setup yields 4D data. Existing methods to recover 3D information are computationally expensive. We propose a new, more intuitive method to recover 3D objects from event-based stereo data, using a Self-Organizing Map to solve the correspondence problem and establish a structure similar to a voxel grid. Our approach is also computationally expensive, but it copes with performance issues through massive parallelization. Furthermore, the relatively small voxel grid makes it a memory-friendly solution. The technique is very powerful, as it does not need any prior knowledge of the extrinsic and intrinsic camera parameters; instead, those parameters, and also the lens distortion, are learned implicitly. Not only do we not require a parallel camera setup, as many existing methods do, we do not need any information about the camera alignment at all. We evaluated our method in a qualitative analysis and by finding image correspondences.
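The abstract does not spell out the paper's update rule, so as general background only, a Kohonen-style Self-Organizing Map with a 3D node lattice (the kind of structure that could play the role of a voxel grid) can be sketched as follows. All parameters, grid sizes, and the pure-Python implementation are illustrative assumptions, not the authors' method:

```python
import math
import random

def train_som(samples, grid_size=4, epochs=10, lr0=0.5, sigma0=2.0):
    """Train a tiny 3D SOM: a lattice of nodes whose 3D weight vectors
    adapt toward input points (here a stand-in for fused stereo-event data).
    Illustrative sketch only; parameters are arbitrary assumptions."""
    random.seed(0)
    # Lattice coordinates (i, j, k) of each node, and its 3D weight vector.
    nodes = [(i, j, k) for i in range(grid_size)
                       for j in range(grid_size)
                       for k in range(grid_size)]
    weights = {n: [random.random() for _ in range(3)] for n in nodes}
    for epoch in range(epochs):
        # Learning rate and neighborhood radius decay over epochs.
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for x in samples:
            # Best-matching unit: node whose weight is closest to the input.
            bmu = min(nodes, key=lambda n: sum((w - xi) ** 2
                                               for w, xi in zip(weights[n], x)))
            for n in nodes:
                # Lattice distance to the BMU controls the update strength.
                d2 = sum((a - b) ** 2 for a, b in zip(n, bmu))
                h = math.exp(-d2 / (2 * sigma ** 2))
                weights[n] = [w + lr * h * (xi - w)
                              for w, xi in zip(weights[n], x)]
    return weights
```

Feeding such a map with triangulated (or, as in the paper, implicitly calibrated) point observations pulls the node lattice toward the observed surface, which is what makes a SOM attractive for building a voxel-grid-like 3D structure without explicit camera parameters.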