Lars Böcking (B.Sc.)
Research Associate
Background
Lars Böcking studied Industrial Engineering and Management at the Karlsruhe Institute of Technology (KIT). He wrote his final thesis at the Institute of Applied Informatics and Formal Description Methods (AIFB) in cooperation with the Beijing Institute of Technology. During his three-month stay in Beijing, he developed an evaluation concept for methods of interconnecting business processes in the context of digitalization.
Since April 2020 he has been working as a research associate in the "Knowledge Management" department headed by Prof. Dr. York Sure-Vetter at FZI. In his current research, Lars Böcking focuses on information-driven decisions in reinforcement learning: through the systematic processing of information, decisions that were previously made subjectively by humans can be delegated to a system.
Publications
Newspaper or journal articles (2)
- Towards Modular Neural Architecture Search
Lars Boecking, Patrick Philipp and Cedric Kulbach, 2020
In this work we present Modular Neural Architecture Search (ModNAS), which aims at reducing the complexity of the underlying search space by fostering the reuse of successful neural cells. We define a new modularized search space, which enables efficient search based on a strong limitation to predefined building blocks as well as transferability to novel, unseen tasks. We present a preliminary evaluation of ModNAS for CIFAR-10, CIFAR-100 and Fashion-MNIST based on modules from the NAS-Bench-101 benchmark, where we alternate between random and pre-ranked retrieval based on documented CIFAR-10 accuracies. The results are promising in that we retrieve competitive architectures in 6 GPU hours, which highlights the potential of sophisticated ranking approaches for modules in our framework.
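The sketch below is a minimal illustration of the retrieval idea described in the abstract above: picking cell modules either at random or pre-ranked by documented accuracies, then stacking them into a fixed skeleton. All cell names, accuracy values, and the assembly scheme are hypothetical placeholders, not the actual ModNAS implementation or NAS-Bench-101 data.

```python
import random

# Hypothetical catalogue of cell modules with documented CIFAR-10 accuracies.
# Names and numbers are illustrative only; they are not taken from the paper
# or from NAS-Bench-101.
CANDIDATE_CELLS = {
    "cell_a": 0.912,
    "cell_b": 0.934,
    "cell_c": 0.901,
    "cell_d": 0.945,
}


def retrieve_cells(strategy, k=2):
    """Retrieve k cell modules either at random or pre-ranked by accuracy."""
    names = list(CANDIDATE_CELLS)
    if strategy == "random":
        return random.sample(names, k)
    if strategy == "pre_ranked":
        return sorted(names, key=CANDIDATE_CELLS.get, reverse=True)[:k]
    raise ValueError(f"unknown retrieval strategy: {strategy}")


def assemble_architecture(cells, depth=4):
    """Stack the retrieved cells into a fixed macro-skeleton (round-robin)."""
    return [cells[i % len(cells)] for i in range(depth)]


if __name__ == "__main__":
    for strategy in ("random", "pre_ranked"):
        cells = retrieve_cells(strategy)
        print(f"{strategy}: {assemble_architecture(cells)}")
```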
- Recommending safe actions by learning from sub-optimal demonstrations
Lars Böcking, Patrick Philipp, 2020
Clinical pathways describe the treatment procedure for a patient from a medical point of view. Based on the patient's condition, a decision is made about the next actions to be carried out. Such recurring sequential process decisions could well be outsourced to a reinforcement learning agent, but the patient's safety should always be the main consideration when suggesting activities. The development of individual pathways is also cost- and time-intensive, so a smart agent could support and relieve physicians. In addition, not every patient reacts in the same way to a clinical intervention, so the personalization of a clinical pathway deserves attention. In this paper we address the fundamental problem that the use of reinforcement learning agents in the specification of clinical pathways should provide an individually optimal proposal within the limits of safety constraints. Imitating the decisions of physicians can guarantee safety but not optimality. Therefore, we present an approach that ensures compliance with health-critical rules without limiting the exploration of the optimum. We evaluate our approach on an open-source Gym environment, where we show that our adaptation of behavior cloning not only adheres better to safety regulations but also explores the space of the optimum better in terms of the collected rewards.
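As a generic illustration of recommending only actions that satisfy safety rules (not the behavior-cloning adaptation described in the abstract above), the sketch below filters a policy's action preferences through a hypothetical constraint before suggesting an action. The rule, threshold, state, and scores are all assumptions made for illustration.

```python
import numpy as np


def is_safe(state, action):
    """Hypothetical health-critical rule: forbid action 1 whenever the first
    state feature exceeds a threshold (purely illustrative)."""
    return not (action == 1 and state[0] > 0.8)


def recommend_action(state, action_scores):
    """Return the highest-scoring action among those that pass the safety check.

    action_scores stands in for the preferences of a policy cloned from
    (possibly sub-optimal) demonstrations.
    """
    mask = np.array([is_safe(state, a) for a in range(len(action_scores))])
    masked_scores = np.where(mask, action_scores, -np.inf)
    return int(np.argmax(masked_scores))


if __name__ == "__main__":
    state = np.array([0.9, 0.1])             # triggers the safety rule
    scores = np.array([0.2, 0.7, 0.1])       # the cloned policy prefers action 1
    print(recommend_action(state, scores))   # -> 0, the best safe alternative
```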
Contact
E-mail: boecking@fzi.de