Learning to reproduce visually similar movements by minimizing event-based prediction error

Resource type
Conference
Author(s)
Kaiser, Jacques and Melbaum, Svenja and Tieck, J. Camilo Vasquez and Roennau, Arne and Butz, Martin V. and Dillmann, Rüdiger
Year
2018
Pages
260--267
Book title
2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob)
Publisher
IEEE
Abstract
Prediction is believed to play an important role in the human brain. However, it is still unclear how predictions are used in the process of learning new movements. In this paper, we present a method to learn movements from visual prediction. The method consists of two phases: learning a visual prediction model for a given movement, then minimizing the visual prediction error. The visual prediction model is learned from a single demonstration of the movement during which only visual input is sensed. Unlike previous work, we represent visual information with event streams as provided by a Dynamic Vision Sensor. This allows us to process, with spiking neural networks, only the changes in the environment rather than complete snapshots. By minimizing the prediction error, movements visually similar to the demonstration are learned. We evaluate our method by learning simple movements from human demonstrations on different simulated robots. We show that the definition of the visual prediction error greatly impacts the movements learned by our method.
Online Sources
https://ieeexplore.ieee.org/abstract/document/8487959
DOI
10.1109/BIOROB.2018.8487959
Research focus
Service Robotics and Mobile Manipulation
Project
Human Brain Project
Published by
Jacques Kaiser