Keynote Speakers

Prof. Dongheui Lee

Dongheui Lee is Full Professor of Autonomous Systems at the Institute of Computer Technology, Faculty of Electrical Engineering and Information Technology, TU Wien. She also leads the Human-centered Assistive Robotics group at the German Aerospace Center (DLR). Her research interests include human motion understanding, human-robot interaction, machine learning in robotics, and assistive robotics. Prior to her appointment at TU Wien, she was Assistant and later Associate Professor at the Technical University of Munich (2009-2022), Project Assistant Professor at the University of Tokyo (2007-2009), and a research scientist at the Korea Institute of Science and Technology (KIST) (2001-2004). She obtained her PhD (2007) from the Department of Mechano-Informatics, University of Tokyo, Japan, and her B.S. (2001) and M.S. (2003) degrees in mechanical engineering from Kyung Hee University, Korea. She was awarded a Carl von Linde Fellowship at the TUM Institute for Advanced Study (2011) and a Helmholtz Professorship Prize (2015). She has served as Senior Editor and a founding member of IEEE Robotics and Automation Letters (RA-L) and as Associate Editor for the IEEE Transactions on Robotics.

Talk: TBD

Abstract:

TBD


Dr. Sylvain Calinon

Dr. Sylvain Calinon is a Senior Research Scientist at the Idiap Research Institute and a Lecturer at the Ecole Polytechnique Fédérale de Lausanne (EPFL). He heads the Robot Learning & Interaction group at Idiap, with expertise in human-robot collaboration, robot learning from demonstration, and model-based optimization. The approaches developed in his group apply to a wide range of applications requiring manipulation skills, with robots that are either close to us (assistive and industrial robots), part of us (prosthetics and exoskeletons), or far away from us (shared control and teleoperation). Website: https://calinon.ch

Talk: Robot learning from few examples by exploiting the structure and geometry of data

Abstract:

A wide range of applications can benefit from robots acquiring manipulation skills by interaction with humans. In this presentation, I will discuss the challenges that such a learning process encompasses, including representations for manipulation skills that can exploit the structure and geometry of the acquired data in an efficient way, the development of optimal control strategies that can exploit variations in manipulation skills, and the development of intuitive interfaces to acquire meaningful demonstrations.

From a machine learning perspective, the core challenge is that robots can only rely on a small number of demonstrations. The good news is that we can exploit bidirectional human-robot interaction as a way to collect better data. We can also rely on various structures that remain the same across a wide range of robotic tasks. One such structure is geometric: learning strategies originally developed for standard Euclidean spaces can be extended to Riemannian manifolds. In robotics, these manifolds include orientations, manipulability ellipsoids, graphs, and subspaces. Another type of structure that we study relates to the organization of data as multidimensional arrays (also called tensors). These data appear in various robotic tasks, either as the natural organization of sensorimotor data (tactile arrays, images, kinematic chains) or as the result of preprocessing steps (moving time windows, covariance features). Tensor factorization techniques (also called tensor methods or multilinear algebra) can be used to learn from only a few tensor datapoints by exploiting the multidimensional nature of the data.
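To make the Riemannian idea concrete, the sketch below interpolates between two orientations represented as unit vectors on the sphere S^2, replacing Euclidean addition and subtraction with exponential and logarithmic maps. This is a minimal illustration of the general technique, not code from the speaker's group; the function names are chosen here for exposition.

```python
import numpy as np

def log_map(base, x):
    """Riemannian logarithm on the unit sphere: maps x to the tangent
    space at base (a vector whose norm is the geodesic distance)."""
    d = np.arccos(np.clip(base @ x, -1.0, 1.0))
    if np.isclose(d, 0.0):
        return np.zeros_like(base)
    v = x - (base @ x) * base            # project x onto the tangent space
    return d * v / np.linalg.norm(v)

def exp_map(base, v):
    """Riemannian exponential on the sphere: follows the geodesic
    starting at base with initial velocity v."""
    n = np.linalg.norm(v)
    if np.isclose(n, 0.0):
        return base.copy()
    return np.cos(n) * base + np.sin(n) * v / n

# Geodesic interpolation between two demonstrated orientations:
# the manifold analogue of linear interpolation in Euclidean space.
p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
midpoint = exp_map(p, 0.5 * log_map(p, q))   # stays on the sphere
```

The same exp/log construction carries over to the other manifolds mentioned above (e.g. quaternions for full 3D orientation, or symmetric positive definite matrices for manipulability ellipsoids), with the appropriate metric in each case.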

Another key challenge in robot skill acquisition is to link the learning aspects to the control aspects. Optimal control provides a framework that allows us to take into account the possible variations of a task, the uncertainty of sensorimotor information, and movement coordination patterns, by relying on well-grounded control techniques such as linear quadratic tracking, differential dynamic programming, and their extensions to model predictive control. The formulation draws explicit links with learning techniques: these controllers can be recast as Gauss-Newton optimization problems formulated at the trajectory level (in both control space and state space), which facilitates the connection to probabilistic approaches.
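As a hedged illustration of this trajectory-level view (a sketch under simplifying assumptions, not the speaker's implementation), the snippet below solves a linear quadratic tracking problem in batch form. Because the residual is linear in the control sequence, the Gauss-Newton step reduces to a single regularized least-squares solve.

```python
import numpy as np

def batch_lqt(A, B, x0, targets, q_weight=1e2, r_weight=1e-2):
    """Linear quadratic tracking in batch form:
    min_u ||mu - (Sx x0 + Su u)||^2_Q + ||u||^2_R,
    where x_{1:T} = Sx x0 + Su u stacks the rollout of x_{t+1} = A x_t + B u_t."""
    T = len(targets)
    n, m = B.shape
    # Transfer matrices mapping the initial state and controls to the trajectory.
    Sx = np.vstack([np.linalg.matrix_power(A, t + 1) for t in range(T)])
    Su = np.zeros((T * n, T * m))
    for t in range(T):
        for k in range(t + 1):
            Su[t*n:(t+1)*n, k*m:(k+1)*m] = np.linalg.matrix_power(A, t - k) @ B
    mu = np.concatenate(targets)
    Q = q_weight * np.eye(T * n)          # tracking precision (isotropic here)
    R = r_weight * np.eye(T * m)          # control effort penalty
    # Closed-form solution: one Gauss-Newton step, exact for a linear residual.
    u = np.linalg.solve(Su.T @ Q @ Su + R, Su.T @ Q @ (mu - Sx @ x0))
    x = (Sx @ x0 + Su @ u).reshape(T, n)
    return u.reshape(T, m), x

# Example: scalar integrator x_{t+1} = x_t + u_t tracking a constant target.
u, x = batch_lqt(np.array([[1.0]]), np.array([[1.0]]),
                 np.array([0.0]), [np.array([1.0])] * 5)
```

In a learning-from-demonstration setting, the isotropic Q above would be replaced by precision matrices estimated from the demonstrations, so that the controller tracks tightly only where the demonstrations show low variability.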