M.Sc. Lea Steffen
Career
Lea Steffen studied Medical Informatics at Hochschule Mannheim from 2010 to 2014. She then completed the Master's programme in Computer Science at the Karlsruhe Institute of Technology (KIT) from 2014 to 2017, specializing in Cognitive Systems as well as Robotics and Automation. She wrote her Master's thesis, "Multimodal Motion Activation for Robot Control using Spiking Neurons", at the FZI Forschungszentrum Informatik in the Technical Cognitive Assistance Systems (TKS) department. Since August 2017, she has been a research scientist in the Interactive Diagnosis and Service Systems (IDS) department within the Intelligent Systems and Product Engineering (ISPE) research division at FZI.
Publications
Journal articles (1)
- Neuromorphic Stereo Vision: A Survey of Bio-Inspired Sensors and Algorithms
Lea Steffen, Daniel Reichard, Jakob Weinland, Jacques Kaiser, Arne Roennau, Rüdiger Dillmann, 2019
Any visual sensor, whether artificial or biological, maps the 3D world onto a 2D representation. The missing dimension is depth, and most species use stereo vision to recover it. Stereo vision implies multiple perspectives and matching; hence it obtains depth from a pair of images. Algorithms for stereo vision are also used successfully in robotics. However, while biological systems seem to compute disparities effortlessly, artificial methods suffer from high energy demands and latency. The crucial part is the correspondence problem: finding the matching points of two images. The development of event-based cameras, inspired by the retina, enables the exploitation of an additional physical constraint: time. Due to their asynchronous mode of operation, which takes the precise timing of spikes into account, Spiking Neural Networks can take advantage of this constraint. In this work, we investigate sensors and algorithms for event-based stereo vision, leading to more biologically plausible robots. We focus mainly on binocular stereo vision.
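A minimal, hypothetical sketch (not code from the survey) of how that temporal constraint can be exploited for matching: events from a left and a right event camera are paired when they occur within a few microseconds of each other, lie on almost the same row, and share polarity. The event tuples, tolerances, and helper function are invented for illustration.

```python
# Each event: (timestamp in microseconds, x, y, polarity); values are made up.
left_events = [(1000, 40, 12, 1), (1005, 41, 12, 1)]
right_events = [(1002, 33, 12, 1), (1050, 10, 30, -1)]

def match_events(left, right, dt_max=5, dy_max=1):
    """Pair events that are close in time, lie on (almost) the same row and
    share polarity; return (x_left, x_right, y, disparity) tuples."""
    matches = []
    for t_l, x_l, y_l, p_l in left:
        for t_r, x_r, y_r, p_r in right:
            if abs(t_l - t_r) <= dt_max and abs(y_l - y_r) <= dy_max and p_l == p_r:
                matches.append((x_l, x_r, y_l, x_l - x_r))
    return matches

print(match_events(left_events, right_events))  # [(40, 33, 12, 7), (41, 33, 12, 8)]
```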
Conference papers (6)
- Creating an Obstacle Memory Through Event-Based Stereo Vision and Robotic Proprioception
Lea Steffen, Benedict Hauck, Jacques Kaiser, Jakob Weinland, Stefan Ulbrich, Daniel Reichard, Arne Roennau, Rüdiger Dillmann, 2019
To guarantee safety in a shared workspace between humans and robots, flexible robotic motion control is required. Unfortunately, path planning algorithms for complex robotic systems are too computationally expensive to enable a real-time solution on conventional hardware. With the long-term goal of performing a reactive path planning algorithm, we apply neuromorphic sensors and Spiking Neural Networks to create an obstacle memory of a robot's workspace. We create a neuron population representing all objects of the robot's work cell except for the robot itself. Furthermore, we adapt the network to preserve older states while still reacting to new events, obtaining a correct obstacle memory at any given point in time. For this purpose, we control a kinematic chain, the robot arm, using two sensor networks for proprioception and exteroception.
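As an illustration of the idea, and not the authors' implementation, the sketch below models an obstacle memory as a leaky population over a coarse voxel grid: exteroceptive events excite cells, a proprioceptive mask removes the voxels occupied by the robot itself, and a slow decay preserves older states. Grid size, gains, and the update function are assumptions.

```python
import numpy as np

GRID = (20, 20, 20)          # assumed discretization of the robot's work cell
memory = np.zeros(GRID)      # one "neuron" per voxel; value ~ obstacle belief

def update(memory, obstacle_events, robot_voxels,
           gain=1.0, decay=0.995, threshold=0.5):
    """One step: decay old state slowly, integrate new exteroceptive events,
    and mask out voxels occupied by the robot itself (proprioception)."""
    memory *= decay                           # slow leak preserves older obstacles
    for v in obstacle_events:                 # events, e.g. from event-based stereo
        memory[v] = min(memory[v] + gain, 1.0)
    for v in robot_voxels:                    # the robot is not an obstacle
        memory[v] = 0.0
    return memory, memory > threshold         # binary obstacle map

memory, occupied = update(memory, obstacle_events=[(3, 4, 5)], robot_voxels=[(10, 10, 2)])
print(int(occupied.sum()))  # 1
```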
- Multi-View 3D Reconstruction with Self-Organizing Maps on Event-Based Data
Lea Steffen, Stefan Ulbrich, Arne Roennau, Rüdiger Dillmann, 2019
Depth perception is crucial for many applications, including robotics, UAVs, and autonomous driving. The visual sense, as well as cameras, maps the 3D world onto a 2D representation, losing the dimension representing depth. One way to recover 3D information from 2D images is to record and join data from multiple viewpoints; in the case of a stereo setup, 4D data is obtained. Existing methods to recover 3D information are computationally expensive. We propose a new, more intuitive method to recover 3D objects from event-based stereo data by using a Self-Organizing Map to solve the correspondence problem and establish a structure similar to a voxel grid. Our approach, although also computationally expensive, copes with performance issues through massive parallelization. Furthermore, the relatively small voxel grid makes this a memory-friendly solution. The technique is very powerful, as it does not need any prior knowledge of extrinsic and intrinsic camera parameters. Instead, those parameters, and also the lens distortion, are learned implicitly. Not only do we not require a parallel camera setup, as many existing methods do, we do not even need any information about the alignment at all. We evaluated our method in a qualitative analysis and by finding image correspondences.
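A minimal NumPy sketch of the underlying idea, assuming 4D samples (x_left, y_left, x_right, y_right) from matched events and an 8×8×8 lattice (both invented for illustration, not the paper's actual setup): a Self-Organizing Map pulls its lattice nodes towards the stereo samples, so the learned lattice approximates the scene structure without explicit camera calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
lattice = (8, 8, 8)                               # assumed voxel-grid resolution
weights = rng.random(lattice + (4,))              # one 4D prototype per lattice node
coords = np.stack(np.meshgrid(*[np.arange(n) for n in lattice], indexing="ij"), axis=-1)

def train_step(weights, sample, lr=0.1, sigma=1.5):
    """Move the best-matching node and its lattice neighbours towards the sample."""
    bmu = np.unravel_index(np.linalg.norm(weights - sample, axis=-1).argmin(), lattice)
    lattice_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)  # distance on the lattice
    neighbourhood = np.exp(-lattice_dist**2 / (2 * sigma**2))[..., None]
    return weights + lr * neighbourhood * (sample - weights)

# Stand-in for (x_left, y_left, x_right, y_right) tuples of matched events.
for sample in rng.random((1000, 4)):
    weights = train_step(weights, sample)
```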
- Multi-Modal Motion Activation for Robot Control Using Spiking Neurons
J. Camilo Vasquez Tieck, Lea Steffen, Jacques Kaiser, Arne Roennau, Rüdiger Dillmann, 2018
- Controlling a Robot Arm for Target Reaching without Planning Using Spiking Neurons
J. Camilo Vasquez Tieck, Lea Steffen, Jacques Kaiser, Arne Roennau, Rüdiger Dillmann, 2018
- Microsaccades for Neuromorphic Stereo Vision
Jacques Kaiser, Jakob Weinland, Philip Keller, Lea Steffen, J. Camilo Vasquez Tieck, Daniel Reichard, Arne Roennau, Jorg Conradt, Rüdiger Dillmann, 2018
- Towards a Vision-Based Concept for Gesture Control of a Robot Providing Visual Feedback
Gabriele Bolano, Atanas Tanev, Lea Steffen, Arne Roennau, Ruediger Dillmann, 2018
Contact
Phone: +49 721 9654-218
Email: steffen@fzi.de