M.Sc. Gabriele Bolano
Research Associate
Career
Gabriele Bolano studied at the University of Pisa from 2009 to 2016. He obtained his Bachelor's degree in Computer Science in 2012 and his Master's degree in Robotics and Automation in 2016.
His research topics are mainly related to human-robot collaboration and interaction.
Since April 2017 he has been a Research Scientist at the Department of Interactive Diagnostic and Service Systems (IDS).
Publications
Journal or Magazine Articles (5)
- Virtual Reality for Offline Programming of Robotic Applications with Online Teaching Methods
G. Bolano, A. Roennau, R. Dillmann, 2020
Robotic systems are complex and commonly require experts to program the motions and interactions between all the different components. Operators with programming skills are usually needed to make the robot perform a new task or even to apply small changes to its current behavior. For this reason, many tools have been developed to ease the programming of robotic systems. Online programming methods rely on the use of the robot itself in order to move it to the desired configurations. Simulation-based methods, on the other hand, enable offline teaching of the needed program without involving the actual hardware setup. Virtual Reality (VR) allows the user to program a robot safely and effortlessly, without the need to move the real manipulator. However, online programming methods are still needed for on-site adjustments, and a common interface between the two approaches is usually not available. In this work we propose a VR-based framework for programming robotic tasks. The deployed system architecture allows the integration of the defined programs into existing tools for online teaching and execution on the real hardware. The proposed virtual environment enables the intuitive definition of the entire task workflow without the need to involve the real setup. The bilateral communication between this component and the robotic hardware allows the user to introduce changes in the virtual environment as well as in the real system. In this way, both can be kept up to date with the latest changes and used in an interchangeable way, exploiting the advantages of both methods in a flexible manner.
- Transparent Robot Behavior Using Augmented Reality in Close Human-Robot Interaction
G. Bolano, C. Juelg, A. Roennau, R. Dillmann, 2019
Most robots repeat their motions precisely and without variation. Nowadays, however, there are also robots able to dynamically change their motion and plan according to the people and environment that surround them. Furthermore, they are able to interact and cooperate with humans. Without information about the robot's targets and intentions, the user feels uncomfortable even around a safe robot. In close human-robot collaboration, it is very important to enable the user to understand the robot's intentions in a quick and intuitive way. In this work we have developed a system that uses augmented reality to project useful information directly into the workspace. The robot intuitively shows its planned motion and task state. The AR module interacts with a vision system in order to display changes in the workspace dynamically. The representation of information about possible collisions and changes of plan allows the human to have a more comfortable and efficient interaction with the robot. The system is evaluated in different setups.
- Advanced Usability Through Constrained Multi Modal Interactive Strategies: The CookieBot
G. Bolano, P. Becker, J. Kaiser, A. Roennau, R. Dillmann, 2019
Service robots are becoming able to perform a variety of tasks and are currently used for many different applications. For this reason, people with different backgrounds, including those without robotics experience, need to interact with them. To enable the user to control the motion of the robot's end-effector, it is important to provide an easy and intuitive interface. In this work we propose an intuitive method for controlling a robot's TCP position and orientation. This is done by taking the robot kinematics into account in order to avoid dangerous configurations and by defining rotational constraints. The user can interact with the robot and control its end-effector using a set of objects tracked by a camera system. The autonomy level of the robot changes depending on the phase of the interaction for better efficiency. An intuitive GUI has been developed to ease the interaction and help the user achieve better precision in the control. This is made possible also through the scaling of the tracked motion, which is represented as visual feedback. We tested the system through multiple experiments that evaluated how people with no experience interact with the robot and measured the precision of the method.
- Transparent Robot Behavior by Adding Intuitive Visual and Acoustic Feedback to Motion Replanning
G. Bolano, A. Roennau, R. Dillmann, 2018
Nowadays robots are able to work safely close to humans. They are lightweight, intrinsically safe, and capable of avoiding obstacles as well as understanding and predicting human motions. In this collaborative scenario, communication between humans and robots is a fundamental aspect of achieving good efficiency and ergonomics in task execution. A lot of research has been done on robot understanding and prediction of human behavior, allowing the robot to replan its motion trajectories. This work focuses on the communication of the robot's intentions to the human, making its goals and planned trajectories easily understandable. Visual and acoustic information has been added to give the human intuitive feedback for immediately understanding the robot's plan. This allows better interaction and makes humans feel more comfortable, without any anxiety related to the unpredictability of the robot's motion. Experiments have been conducted in a collaborative assembly scenario. The results of these tests were collected in questionnaires, in which the participants reported the differences and improvements they experienced using the feedback communication system.
- Towards a Vision-Based Concept for Gesture Control of a Robot Providing Visual Feedback
G. Bolano, A. Tanev, L. Steffen, A. Roennau, R. Dillmann, 2018
Human-Robot Interaction (HRI) plays an important and growing role, both in industrial applications and in game development. In recent years, robots have become controllable by gestures via special devices, but these methods are not intuitive and usually require a learning phase. This paper proposes an intuitive method for controlling a robot end-effector using human gestures. Vision-based techniques were used to track the position of the user's hand, which is directly translated into control signals. The use of a 3D camera sensor makes it easy to control the robot tool position in all dimensions. Our approach includes a Graphical User Interface (GUI) to ease the control through interactive visual feedback. This interface, including 3D markers, text messages, and the visualization of the user's point cloud and the robot model, enables a control mechanism that does not require a teaching phase. Our approach was tested and evaluated in realistic experiments to show that it works reliably and is extremely intuitive.
Contact
Phone: +49 721 9654-215
E-Mail: bolano@fzi.de