The 35th International Conference on Computer Animation and Social Agents
casa2022_conf@outlook.com

Speakers at CASA 2022


Keynote Speakers

Professor Chenguang (Charlie) Yang
Bristol Robotics Laboratory, University of the West of England, UK
Weblink: https://people.uwe.ac.uk/Person/CharlieYang


Speech Title: Human-Robot Interactive Learning and Control
Abstract:
This talk will introduce our advances in robot skill learning and human-robot interactive control. We use control theory to model the control mechanism of motor neurons, which helps us develop human-like robot controllers that realize variable impedance control, so the robot can adapt its physical interaction to a changing environment. We further propose a multi-task impedance control and impedance learning method for a human-like manipulator with redundant degrees of freedom, achieving compliant human-robot interaction. Learning-from-demonstration methods are used to efficiently transfer modularized skills to robots using multi-modal information, such as surface electromyography signals and contact forces, improving the reliability of skill reproduction in different situations. We have also developed an enhanced neural-network shared control system for teleoperation that exploits the redundancy of the joint space to avoid collisions automatically, so the operator does not need to attend to possible collisions during manipulation. In addition, with the help of deep learning, we designed a tool power compensation system for teleoperated surgery, improving force and motion tracking at both ends of the teleoperation system. Finally, this talk will also introduce our research on human-robot collaboration and skill generalization.
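For readers unfamiliar with the idea of variable impedance control mentioned above, the following is a minimal, illustrative sketch in Python. It applies a generic textbook impedance law with a toy stiffness-adaptation rule to a simulated 2-D point mass; the gains, adaptation rule, and function names are assumptions for illustration only and do not reproduce the speaker's actual controllers.

    import numpy as np

    def variable_impedance_force(x, dx, x_d, dx_d, K, D):
        """Cartesian impedance force for the current stiffness K and damping D."""
        return K @ (x_d - x) + D @ (dx_d - dx)

    def adapt_stiffness(K, x, x_d, gain=5.0, k_min=10.0, k_max=500.0):
        """Toy adaptation rule (assumed for illustration): stiffen the directions
        with large tracking error, within safety bounds."""
        k_diag = np.clip(np.diag(K) + gain * np.abs(x_d - x), k_min, k_max)
        return np.diag(k_diag)

    # Example: a unit point mass tracking a fixed Cartesian target.
    x, dx = np.zeros(2), np.zeros(2)
    x_d, dx_d = np.array([0.3, 0.1]), np.zeros(2)
    K, D = np.diag([50.0, 50.0]), np.diag([10.0, 10.0])
    dt = 0.01
    for _ in range(200):
        f = variable_impedance_force(x, dx, x_d, dx_d, K, D)
        dx = dx + f * dt          # unit mass: acceleration equals force
        x = x + dx * dt
        K = adapt_stiffness(K, x, x_d)
    print(x)  # x approaches x_d as the stiffness adapts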

Biography: Professor Chenguang (Charlie) Yang leads the Robot Teleoperation Group of the Bristol Robotics Laboratory and is a Corresponding Co-Chair of the Technical Committee on Collaborative Automation for Flexible Manufacturing (CAFM) of the IEEE Robotics and Automation Society. He received his PhD from the National University of Singapore (2010) and carried out postdoctoral research at Imperial College London. He is a recipient, as lead author, of the prestigious IEEE Transactions on Robotics Best Paper Award (2012) and the IEEE Transactions on Neural Networks and Learning Systems Outstanding Paper Award (2022). He has been awarded an EPSRC Innovation Fellowship and an EU FP7 Marie Curie International Incoming Fellowship. He is a Fellow of the British Computer Society and of the Higher Education Academy. He serves as an Associate Editor of a number of leading international journals, including IEEE Transactions on Robotics. His research interests lie in human-robot interaction and intelligent system design.

 

 

Kai Xu
National University of Defense Technology, China

 

Kai Xu is a Professor at the School of Computer, National University of Defense Technology, where he received his Ph.D. in 2011. He is currently an adjunct professor at Simon Fraser University and was a visiting scholar at Princeton University during 2017-2018. His research interests include geometric modeling and shape analysis, especially data-driven approaches to problems in those areas, as well as 3D vision and its robotic applications. He has published 100+ papers, including 20+ SIGGRAPH/TOG papers. He has co-organized several SIGGRAPH Asia courses, CVPR tutorials, and Eurographics STARs. He serves on the editorial boards of ACM Transactions on Graphics, Computer Graphics Forum, Computers & Graphics, and The Visual Computer. He has also served as program co-chair of CAD/Graphics 2017, ICVRV 2017, and ISVC 2018, and as a PC member for several prestigious conferences, including SIGGRAPH, SIGGRAPH Asia, Eurographics, SGP, and PG. His research work can be found on his personal website: www.kevinkaixu.net

Title: Online Dense Reconstruction under Fast Camera Motion

Abstract: Online reconstruction based on RGB-D sequences has thus far been restricted to relatively slow camera motions (<1 m/s). Under very fast camera motion (e.g., 3 m/s), the reconstruction easily crumbles even for state-of-the-art methods. Fast motion brings two challenges to depth fusion: 1) the high nonlinearity of camera pose optimization due to large inter-frame rotations, and 2) the lack of reliably trackable features due to motion blur. In this talk, I will introduce a new method that tackles fast-motion camera tracking in the absence of inertial measurements using random optimization. The method attains good-quality pose tracking under fast camera motion (up to 4 m/s) at a real-time frame rate, without loop closure or global pose optimization. I will also present our recent progress on extending the method to integrate an IMU sensor for online reconstruction under even faster camera motions. This requires handling a much higher-dimensional state space, for which we present a robust method based on active subspace random optimization.
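As a rough illustration of what randomized pose search can look like, the sketch below fits a 6-DoF pose (Euler angles plus translation) aligning a synthetic point cloud to a displaced copy by sampling candidate poses around the best one found so far. The cost function, sampling schedule, and all names here are assumptions chosen for illustration; they do not reproduce the talk's actual method or its real-time depth-fusion pipeline.

    import numpy as np

    rng = np.random.default_rng(0)

    def euler_to_R(a, b, c):
        """Rotation matrix from ZYX Euler angles."""
        Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
        Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
        Rx = np.array([[1, 0, 0], [0, np.cos(c), -np.sin(c)], [0, np.sin(c), np.cos(c)]])
        return Rz @ Ry @ Rx

    def cost(pose, src, dst):
        """Mean point-to-point distance after applying the candidate pose."""
        R = euler_to_R(*pose[:3])
        return np.mean(np.linalg.norm(src @ R.T + pose[3:] - dst, axis=1))

    # Synthetic data: a point cloud and a copy moved by an unknown "fast" motion.
    src = rng.normal(size=(500, 3))
    true_pose = np.array([0.4, -0.2, 0.3, 0.5, -0.1, 0.2])
    dst = src @ euler_to_R(*true_pose[:3]).T + true_pose[3:]

    # Random optimization: sample candidate poses around the best one found so
    # far, shrinking the search radius whenever no candidate improves the fit.
    best, best_cost, sigma = np.zeros(6), cost(np.zeros(6), src, dst), 0.5
    for _ in range(300):
        candidates = best + rng.normal(scale=sigma, size=(64, 6))
        costs = np.array([cost(c, src, dst) for c in candidates])
        if costs.min() < best_cost:
            best, best_cost = candidates[costs.argmin()], costs.min()
        else:
            sigma = max(0.7 * sigma, 0.02)
    print(best_cost, best)  # the recovered pose should approach true_pose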