M.Sc. Jacques Kaiser
Former employee
Career
Jacques Kaiser has been pursuing a PhD at FZI since 1 August 2015 within the Human Brain Project.
He studied for his Bachelor's degree at Strasbourg University and graduated during his Erasmus exchange in Durham, England. After a one-year stay in Australia, where he worked as a web developer for a startup, he obtained his Master's degree from the international programme of Grenoble University, specializing in Computer Graphics, Vision and Robotics. He completed his Master's thesis at INRIA Grenoble, working on perception for drones.
More information is available on his personal webpage.
Publications
Articles (8)
- Embodied Event-Driven Random Backpropagation
Jacques Kaiser and Alexander Friedrich and J. Camilo Vasquez Tieck and Daniel Reichard and Arne Roennau and Emre Neftci and Rüdiger Dillmann, 2019
- Embodied Synaptic Plasticity With Online Reinforcement Learning
Kaiser, Jacques and Hoff, Michael and Konle, Andreas and Vasquez Tieck, J. Camilo and Kappel, David and Reichard, Daniel and Subramoney, Anand and Legenstein, Robert and Roennau, Arne and Maass, Wolfgang and Dillmann, Rüdiger, 2019
The endeavor to understand the brain involves multiple collaborating research fields. Classically, synaptic plasticity rules derived by theoretical neuroscientists are evaluated in isolation on pattern classification tasks. This contrasts with the biological brain, whose purpose is to control a body in closed loop. This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from these two fields. The resulting framework makes it possible to evaluate the validity of biologically plausible plasticity models in closed-loop robotics environments. We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following. We show that SPORE is capable of learning to perform policies within the course of simulated hours for both tasks. Provisional parameter explorations indicate that the learning rate and the temperature driving the stochastic processes that govern synaptic learning dynamics need to be regulated for performance improvements to be retained. We conclude by discussing the recent deep reinforcement learning techniques that would be beneficial to increase the functionality of SPORE on visuomotor tasks. (A schematic toy sketch of such a closed-loop evaluation appears at the end of this article list.)
- Running Large-Scale Simulations on the Neurorobotics Platform to Understand Vision – The Case of Visual Crowding
Bornet, Alban and Kaiser, Jacques and Kroner, Alexander and Falotico, Egidio and Ambrosano, Alessandro and Cantero, Kepa and Herzog, Michael H. and Francis, Gregory, 2019
Traditionally, human vision research has focused on specific paradigms and proposed models to explain very specific properties of visual perception. However, the complexity and scope of modern psychophysical paradigms undermine the success of this approach. For example, perception of an element strongly deteriorates when neighboring elements are presented in addition (visual crowding). As was shown recently, the magnitude of deterioration depends not only on the directly neighboring elements but on almost all elements and their specific configuration. Hence, to fully explain human visual perception, one needs to take large parts of the visual field into account and combine all the aspects of vision that become relevant at such scale. These efforts require sophisticated and collaborative modeling. The Neurorobotics Platform (NRP) of the Human Brain Project offers a unique opportunity to connect models of all sorts of visual functions, even those developed by different research groups, into a coherently functioning system. Here, we describe how we used the NRP to connect and simulate a segmentation model, a retina model, and a saliency model to explain complex results about visual perception. The combination of models highlights the versatility of the NRP and provides novel explanations for inward-outward anisotropy in visual crowding.
- The Neurorobotics Platform for Teaching – Embodiment Experiments with Spiking Neural Networks and Virtual Robots
Tieck, J. Camilo Vasquez and Kaiser, Jacques and Steffen, Lea and Schulze, Martin and von Arnim, Axel and Reichard, Daniel and Roennau, Arne and Dillmann, Rüdiger, 2019
- Synaptic Plasticity Dynamics for Deep Continuous Local Learning
Kaiser, Jacques and Mostafa, Hesham and Neftci, Emre, 2018
- Scaling up liquid state machines to predict over address events from dynamic vision sensors
Jacques Kaiser and Rainer Stal and Anand Subramoney and Arne Roennau and Rüdiger Dillmann, 2017
- Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform
Falotico, Egidio and Vannucci, Lorenzo and Ambrosano, Alessandro and Albanese, Ugo and Ulbrich, Stefan and Vasquez Tieck, Juan Camilo and Hinkel, Georg and Kaiser, Jacques and Peric, Igor and Denninger, Oliver and Cauli, Nino, 2017
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because of the complexity of these brain models, which at the current stage cannot meet real-time constraints, it is not possible to embed them in a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP).
In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases make it possible to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.
- Simultaneous State Initialization and Gyroscope Bias Calibration in Visual Inertial Aided Navigation
Jacques Kaiser and A. Martinelli and F. Fontana and D. Scaramuzza, 2016
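The closed-loop evaluation described in the "Embodied Synaptic Plasticity With Online Reinforcement Learning" abstract above can be pictured with a small toy loop. The Python sketch below is not SPORE and uses none of the paper's software stack; under simplified assumptions, it only shows how a reward signal from a placeholder visuomotor task could drive a stochastic weight update governed by a learning rate and a temperature, the two parameters the abstract singles out. All names, the environment, and the linear "policy" are hypothetical stand-ins.

```python
# Toy closed-loop reward-learning loop (hypothetical illustration, not SPORE).
import numpy as np

rng = np.random.default_rng(0)

def step_environment(action):
    """Placeholder for a visuomotor task such as lane following:
    returns an encoded visual observation and a scalar reward."""
    observation = rng.normal(size=8)   # stand-in for encoded event-based vision
    reward = -abs(float(action))       # stand-in reward: small actions are "good"
    return observation, reward

learning_rate = 1e-3   # drift toward higher reward
temperature = 1e-2     # scales the exploratory diffusion term
weights = rng.normal(scale=0.1, size=8)   # stand-in for plastic synaptic parameters
obs, _ = step_environment(0.0)

for step in range(10_000):
    action = float(weights @ obs)          # linear policy instead of a spiking network
    obs, reward = step_environment(action)
    # Reward-modulated stochastic update: a gradient-like drift term plus
    # temperature-scaled noise, mimicking the role (not the form) of synaptic sampling.
    drift = reward * obs
    noise = np.sqrt(2.0 * temperature * learning_rate) * rng.normal(size=weights.shape)
    weights += learning_rate * drift + noise
```

As in the abstract, the interesting question in such a loop is how the learning rate and the temperature must be regulated so that the improvements found during exploration are retained.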
Conference Proceedings (9)
- Advanced Usability Through Constrained Multi Modal Interactive Strategies: The CookieBot
Bolano, Gabriele and Becker, Pascal and Kaiser, Jacques and Roennau, Arne and Dillmann, Ruediger, 2019
- Microsaccades for Neuromorphic Stereo Vision
Kaiser, Jacques and Weinland, Jakob and Keller, Philip and Steffen, Lea and Tieck, J Camilo Vasquez and Reichard, Daniel and Roennau, Arne and Conradt, Jörg and Dillmann, Rüdiger, 2018
Depth perception through stereo vision is an important feature of biological and artificial vision systems. While biological systems compute disparities effortlessly, doing so requires intensive processing in artificial vision systems. The computing complexity resides in solving the correspondence problem – finding matching pairs of points between the two eyes. Inspired by the retina, event-based vision sensors provide a new constraint for solving the correspondence problem: time. Relying on precise spike times, spiking neural networks can take advantage of this constraint. However, disparities can only be computed from dynamic environments, since event-based vision sensors only report local changes in light intensity. In this paper, we show how microsaccadic eye movements can be used to compute disparities from static environments. To this end, we built a robotic head supporting two Dynamic Vision Sensors (DVS) capable of independent panning and simultaneous tilting. We evaluate the method on both static and dynamic scenes perceived through microsaccades. This paper demonstrates the complementarity of event-based vision sensors and active perception, leading to more biologically inspired robots. (A schematic illustration of time-based event matching is sketched at the end of this proceedings list.)
- Microsaccades for asynchronous feature extraction with spiking networks
Kaiser, Jacques and Lindner, Gerd and Tieck, J Camilo Vasquez and Schulze, Martin and Hoff, Michael and Roennau, Arne and Dillmann, Rüdiger, 2018
While extracting spatial features from images has been studied for decades, extracting spatio-temporal features from event streams is still a young field of research. A particularity of event streams is that the same network architecture can be used for recognition of static objects or of motions. However, it is not clear which features provide a good abstraction and in what scenario. In this paper, we evaluate the quality of the features of a spiking HMAX architecture by computing classification performance before and after each layer. Three different classifiers are used: a linear Support Vector Machine (SVM), a Histogram and a Liquid State Machine (LSM). We demonstrate the abstraction capability of classical edge features, as found in the V1 area of the visual cortex, combined with fixational eye movements. Specifically, our performance on the N-Caltech101 dataset outperforms the previously reported F1 score on Caltech101, with a similar architecture but without an STDP learning layer. However, we show that the same edge features do not manage to abstract motions observed with a static DVS from the DvsGesture dataset. In our experiments, pure unsupervised STDP learning in the S2 layer did not lead to the learning of stable and discriminative patterns. Additionally, we show that liquid state machines are a promising computational model for the classification of DVS data with temporal dynamics. This paper is a step forward towards understanding and reproducing biological vision.
- Learning to reproduce visually similar movements by minimizing event-based prediction errorInfoDetails
Kaiser, Jacques and Melbaum, Svenja and Tieck, J Camilo Vasquez and Roennau, Arne and Butz, Martin V and Dillmann, Rüdiger, 2018
Prediction is believed to play an important role in the human brain. However, it is still unclear how predictions are used in the process of learning new movements. In this paper, we present a method to learn movements from visual prediction. The method consists of two phases: learning a visual prediction model for a given movement, then minimizing the visual prediction error. The visual prediction model is learned from a single demonstration of the movement, during which only visual input is sensed. Unlike previous work, we represent visual information with event streams as provided by a Dynamic Vision Sensor. This allows us to process only changes in the environment, instead of complete snapshots, using spiking neural networks. By minimizing the prediction error, movements visually similar to the demonstration are learned. We evaluate our method by learning simple movements from human demonstrations on different simulated robots. We show that the definition of the visual prediction error greatly impacts the movements learned by our method.
- Controlling a robot arm for target reaching without planning using spiking neurons
Tieck, J Camilo Vasquez and Steffen, Lea and Kaiser, Jacques and Roennau, Arne and Dillmann, Rüdiger, 2018
- Multi-modal motion activation for robot control using spiking neurons
Tieck, J Camilo Vasquez and Steffen, Lea and Kaiser, Jacques and Roennau, Arne and Dillmann, Rüdiger, 2018
- Spiking Convolutional Deep Belief Networks
Jacques Kaiser and David Zimmerer and J. Camilo Vasquez Tieck and Stefan Ulbrich and Arne Roennau and Rüdiger Dillmann, 2017
- Semi-Supervised Spiking Neural Network for One-Shot Object Appearance Learning
Peric, Igor and Hangu, Robert and Kaiser, Jacques and Ulbrich, Stefan and Roennau, Arne and Zöllner, J and Dillmann, Rüdiger, 2017
- Towards a Framework for End-To-End Control of a Simulated Vehicle with Spiking Neural Networks
Jacques Kaiser and Juan Camilo Vasquez Tieck and Johannes Christian Hubschneider and Peter Wolf and Michael Weber and Michael Hoff and Alexander Friedrich and Konrad Wojtasik and Arne Roennau and Ralf Kohlhaas and Rüdiger Dillmann, 2016
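The time-based correspondence idea from the "Microsaccades for Neuromorphic Stereo Vision" abstract above can be illustrated with a minimal sketch. The Python code below is an assumption-laden toy: it matches events from two event cameras by image row and nearest timestamp and reads the disparity off the column difference, rather than using the spiking network and robotic head from the paper. The function name and the synthetic events are hypothetical.

```python
# Schematic time-based event matching for stereo disparity (hypothetical toy).
import numpy as np

def match_events(left, right, max_dt=1e-3):
    """left, right: arrays of (t, x, y) events from two sensors.
    Returns disparities x_left - x_right for pairs on the same row whose
    timestamps differ by less than max_dt seconds."""
    disparities = []
    for t_l, x_l, y_l in left:
        same_row = right[right[:, 2] == y_l]          # candidate matches on the same row
        if len(same_row) == 0:
            continue
        j = np.argmin(np.abs(same_row[:, 0] - t_l))   # nearest candidate in time
        if abs(same_row[j, 0] - t_l) < max_dt:
            disparities.append(x_l - same_row[j, 1])
    return np.array(disparities)

# Synthetic events (timestamp in seconds, pixel column, pixel row):
left = np.array([[0.0010, 40.0, 12.0], [0.0020, 41.0, 12.0]])
right = np.array([[0.0011, 35.0, 12.0], [0.0021, 36.0, 12.0]])
print(match_events(left, right))   # -> [5. 5.], i.e. 5 pixels of disparity
```

With a static scene, such temporally close event pairs only exist while the sensors are moving, which is the role the paper assigns to microsaccades.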
Contact
Phone: +49 721 9654-392
E-mail: jkaiser@fzi.de