Robust and Reactive Decision-making and AI Planning of Collaborative and Agile Robots in Complex Environments

This research direction focuses on formal methods and decision-making algorithms for dynamic terrestrial locomotion and aerial manipulation in complex, human-populated environments. We aim to develop scalable planning and decision-making algorithms that enable heterogeneous robot teammates to dynamically interact with unstructured environments and collaborate with humans.


Distributed and Robust Trajectory Optimization of Contact-Physics-Embedded Manipulation Skills

Trajectory optimization through contact is a powerful class of algorithms for designing dynamic robot motions involving physical interaction with the environment. However, the trajectories these algorithms produce can be difficult to execute in practice due to several common sources of uncertainty: robot model errors, external disturbances, and imperfect estimates of surface geometry and friction.
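One concrete way to hedge against imperfect friction estimates is to shrink the friction cone that the optimizer's contact forces must satisfy. The sketch below is illustrative only (the function name, force values, and margin are assumptions, not from the project): it checks a planned contact force against a Coulomb friction cone, optionally tightened by a conservative margin on the estimated friction coefficient.

```python
def in_friction_cone(f_normal, f_tangential, mu, mu_margin=0.0):
    """Check a planned contact force against a Coulomb friction cone.

    A robust variant shrinks the cone by a margin on the estimated
    friction coefficient mu, so the plan survives an overestimate.
    """
    mu_robust = max(mu - mu_margin, 0.0)
    return f_normal >= 0.0 and abs(f_tangential) <= mu_robust * f_normal

# A contact force a nominal plan accepts (mu = 0.5) ...
print(in_friction_cone(10.0, 4.0, mu=0.5))                  # True
# ... is rejected once we hedge against a 0.2 error in mu.
print(in_friction_cone(10.0, 4.0, mu=0.5, mu_margin=0.2))   # False
```

A robust trajectory optimizer would impose the tightened constraint at every planned contact, trading some dynamism for plans that remain feasible under friction misestimates.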


Reactive Task and Motion Planning for Robust Whole-Body Locomotion in Constrained Environments

This project takes a step toward formally synthesizing high-level reactive planners for unified legged and armed locomotion in constrained environments. We formulate a two-player temporal logic game between the contact planner and its possibly adversarial environment.
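The core computation behind such a two-player game is a winning-set fixed point: a state is winning for the planner if it can force the play into the goal no matter what the environment does. A minimal sketch for a reachability objective on a finite game graph follows; the state names and graph are invented for illustration and are not the project's actual abstraction.

```python
def attractor(edges, controlled, targets):
    """Planner's winning set for a reachability objective on a finite
    two-player game graph (standard attractor fixed point).

    edges:      dict mapping each state to its successor list
    controlled: states where the contact planner chooses the move;
                at all other states the adversarial environment chooses
    targets:    goal states (e.g. a stable stance is reached)
    """
    win = set(targets)
    changed = True
    while changed:
        changed = False
        for s, succs in edges.items():
            if s in win or not succs:
                continue
            if s in controlled:
                ok = any(t in win for t in succs)  # one good move suffices
            else:
                ok = all(t in win for t in succs)  # must win vs. every move
            if ok:
                win.add(s)
                changed = True
    return win

EDGES = {
    "start": ["push", "wait"],   # planner: risky push vs. safe wait
    "push":  ["goal", "fall"],   # environment decides the push outcome
    "wait":  ["goal"],
    "goal":  [],
    "fall":  [],
}
WIN = attractor(EDGES, controlled={"start", "wait"}, targets={"goal"})
print(sorted(WIN))  # ['goal', 'start', 'wait'] -- "push" is not winning
```

From "start" the synthesized strategy picks "wait", avoiding the contact whose outcome the adversarial environment controls; richer temporal logic objectives reduce to nested fixed points of this same operator.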


Terrain-aware Locomotion over Granular Media: Morphologically Reconfigurable Robotic Limb Design and Terrain Classification for Versatile Locomotion Planning

Locomotion over cluttered outdoor environments requires the contacting foot to sense terrain geometry, stiffness, and granular-media properties. Although current internal and external mechanical and visual sensors have enabled high-performance state estimation for legged locomotion, rich ground-contact sensing remains a bottleneck to improved control in austere conditions.
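Once foot-contact features are available, terrain classification can be as simple as matching them against per-terrain prototypes. The sketch below is a toy nearest-centroid classifier; the feature set (estimated ground stiffness in N/m, penetration depth in cm), the terrain labels, and the centroid values are all made up for illustration.

```python
def classify_terrain(features, centroids):
    """Return the terrain label whose centroid is nearest in feature space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

# Hypothetical centroids: (ground stiffness [N/m], penetration depth [cm])
CENTROIDS = {
    "rock": (50000.0, 0.2),
    "soil": (8000.0, 1.5),
    "sand": (1500.0, 4.0),
}

print(classify_terrain((47000.0, 0.4), CENTROIDS))  # rock
print(classify_terrain((2000.0, 3.5), CENTROIDS))   # sand
```

In practice the features would be normalized before computing distances (stiffness dominates the raw Euclidean metric here), and the predicted label would feed terrain-specific gait or limb-morphology selection in the locomotion planner.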


Verifiable and Safe Reinforcement Learning of Contact-rich Robotic Tasks in Complex Environments

Reinforcement learning is a promising approach to learning control policies for complex robotic tasks where physics-model-based approaches often fail to generalize. However, despite significant recent progress in reinforcement learning algorithms, formally guaranteed safety of the learned control policies remains a key challenge for robots with highly complex dynamics and contact-rich tasks. In practice, this challenge is usually addressed by manually encoding costs or constraints that steer optimizers toward safe regions of the state space, or by relying on ahead-of-time verification of desired safety properties. This proposal aims to develop a verifiable and scalable reinforcement learning approach that assures the safety of robot contact decisions online by overriding the learned control policy with minimal interference, ensuring correct robot actions even in unforeseen, dynamically changing, and contact-rich environments.
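The "override with minimal interference" idea is often realized as a runtime shield: pass the learned action through unchanged when its predicted outcome is safe, and otherwise substitute the nearest safe alternative. A minimal 1-D sketch, assuming a given one-step dynamics predictor and a verified safe-set membership check (both names and the toy dynamics are hypothetical):

```python
def shield(state, learned_action, step, is_safe, candidates):
    """Runtime safety filter over a learned policy's action.

    step(state, action): one-step dynamics prediction
    is_safe(state):      membership test for a verified safe set
    candidates:          fallback action set to search when overriding
    Returns the learned action if safe, else the safe candidate
    closest to it (minimal interference).
    """
    if is_safe(step(state, learned_action)):
        return learned_action
    safe = [a for a in candidates if is_safe(step(state, a))]
    if not safe:
        raise RuntimeError("no safe fallback action available")
    return min(safe, key=lambda a: abs(a - learned_action))

# Toy example: dynamics x' = x + a, safe set |x| <= 1.
step = lambda x, a: x + a
is_safe = lambda x: abs(x) <= 1.0
candidates = [-0.5, -0.2, 0.0, 0.2, 0.5]

print(shield(0.8, 0.5, step, is_safe, candidates))  # overridden to 0.2
print(shield(0.0, 0.5, step, is_safe, candidates))  # 0.5 passes through
```

Because the shield only intervenes when the predicted next state leaves the safe set, the learned policy retains full authority in the interior of the safe region, which is what makes the interference minimal.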
Reinforcement learning is a promising approach to learn control policies for complex robotics tasks where physics-model-based approaches often fail to generalize. However, despite significant recent progress in reinforcement learning algorithms, formally guaranteed safety of the learned control policies remains a key challenge for robots with highly complex dynamics and contact-rich tasks. This challenge is usually overcome in practice by manually encoding costs or constraints to enforce optimizers towards safe regions of state-space or by relying on ahead-of-time verification of desired safety properties. This proposal aims at a verifiable and scalable reinforcement learning approach that online assures the safety of robot contact decisions by overriding the learned control policy with minimal interference and ensures correct robot actions even in unforeseen, dynamically changing, and contact-rich environments.