* [availability: m/n] denotes the availability of each project: m is the number of currently open positions, and n is the total number of positions.

**When filling out the survey, please enter the project number in the “Which project are you interested in” question.

1. QuadNav – Quadrupedal Navigation [availability: 0/4]
  • Mentor: Max Asselmeier (mass@gatech.edu)
  • Project description: The quadrupedal navigation (QuadNav) project will involve establishing a perception-informed navigation framework onboard our Unitree Go2 quadruped platform. To do so, we will characterize the performance of onboard sensors, including depth cameras and LiDARs, and we will experimentally validate the performance of pre-built collision-avoidance and recovery behaviors. Then, we will set up a navigation pipeline using the Robot Operating System (ROS). This navigation pipeline will include global and local planning, a simultaneous localization and mapping (SLAM) process, and other quadruped-specific modules such as terrain traversability estimation. Four students: two working on software (SLAM + navigation) and two working on hardware (sensor mounting + setup).
  • Ideal team size: 4
  • Expected Outcomes: At the end of the Fall semester, I expect to deploy our navigation framework within indoor settings to allow the quadruped to maneuver through floors of academic buildings while avoiding previously unseen and possibly moving obstacles. At the end of the Spring semester, I aim to include quadruped-specific modules for footstep planning and terrain analysis to prevent stepping on dangerous or unstable regions.
  • Necessary skills: Linux, C++, Python
  • Preferred skills: Robot Operating System (ROS)
  • Grading Info: Grading will be based on attendance, communication, and participation.
  • Reference:
    • Planners that we aim to deploy on the quadrupeds are similar to the following:
    • [1] Potential Gap: A Gap-Informed Reactive Policy for Safe Hierarchical Navigation. Ruoyang Xu; Shiyu Feng; Patricio A. Vela. 2021. Link: https://ieeexplore.ieee.org/document/9513583
    • [2] GPF-BG: A Hierarchical Vision-Based Planning Framework for Safe Quadrupedal Navigation. Shiyu Feng; Ziyi Zhou; Justin S. Smith; Max Asselmeier; Ye Zhao; Patricio A. Vela. 2023. Link: https://ieeexplore.ieee.org/document/10160804
    • [3] Dynamic Gap: Formal Guarantees for Safe Gap-based Navigation in Dynamic Environments. Max Asselmeier, Ye Zhao, Patricio A. Vela. 2022. Link: https://arxiv.org/abs/2210.05022
    • Ultimately, we want to develop an autonomy framework that could be deployed for tasks such as the DARPA SubT Challenge:
    • [4] DARPA Subterranean (SubT) Challenge. Link: https://www.darpa.mil/program/darpa-subterranean-challenge
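To give a flavor of the global-planning component of such a navigation pipeline, here is a minimal grid-based A* sketch in pure Python. It is illustrative only (no ROS, no real map); the toy grid, start, and goal are made up, and the actual planners used will be the gap-based methods cited above.

```python
import heapq

def astar(grid, start, goal):
    """Toy grid A* with 4-connectivity; grid cells: 0 = free, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]              # (f-score, cell)
    g = {start: 0}                       # best known cost-to-come
    came_from = {}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                  # reconstruct path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    # admissible Manhattan heuristic keeps the result optimal
                    h = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(open_set, (ng + h, (nr, nc)))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

In the real pipeline, the grid would come from the SLAM-built occupancy map, and the local planner would refine this global path around moving obstacles.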

2. Humanoid Loco-manipulation Skill Learning [availability: 0/10]
  • Mentor: Zhaoyuan Gu (zgu78@gatech.edu)
  • Project Description: In this project, we will explore state-of-the-art machine-learning approaches that enable our humanoid robot to perform a series of locomotion and manipulation tasks, such as pushing through a spring-loaded door. Instead of traditional reinforcement learning, we will explore diffusion models, an imitation learning technique that has been shown to achieve versatile skills.
  • Ideal team size: 10
  • Expected Outcomes: We will have three major milestones. 1. Generate data of the robot executing our desired loco-manipulation tasks via our existing model-based controller. 2. Implement and train the diffusion model policy that learns from this offline-collected dataset. 3. Verify in simulation and deploy on our humanoid robot, Digit.
  • Necessary skills: Python, ROS.
  • Preferred skills: Robot kinematics, dynamics, and control; PyTorch.
  • References:
    • Our open-sourced controller for Digit locomotion. https://github.com/GTLIDAR/digit_controller
    • Diffusion model for arm manipulation. https://github.com/real-stanford/diffusion_policy
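The core idea behind milestone 2 is that a diffusion policy learns to undo noise added to demonstrated actions. A minimal sketch of the forward (noising) process from DDPM-style diffusion, in pure Python; the schedule constants and the scalar "action" are illustrative, and the real policy would be a neural denoiser trained on the offline dataset:

```python
import math
import random

random.seed(0)

# Linear beta schedule and cumulative alpha-bar, as in standard DDPM setups.
T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alpha_bar = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)

def noise_action(a0, t):
    """Forward diffusion q(a_t | a_0): corrupt a clean demonstrated action a0
    at step t. Returns the noisy action and the injected noise, which is the
    regression target the denoising network is trained to predict."""
    eps = random.gauss(0.0, 1.0)
    a_t = math.sqrt(alpha_bar[t]) * a0 + math.sqrt(1.0 - alpha_bar[t]) * eps
    return a_t, eps

a_t, eps = noise_action(0.5, 50)
```

At inference time the learned denoiser runs this process in reverse, starting from pure noise and iteratively refining it into an action sequence conditioned on the robot's observations.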

3. Digit Humanoid Robot Loco-Manipulation and Its Integration with a Tendon-Driven Soft Arm [availability: 1/5]
  • Mentor: Fukang Liu (fukangliu@gatech.edu)
  • Project Description: This study aims to develop a hybrid triple-arm manipulation system designed for dynamic collaborative manipulation tasks in challenging scenarios. The system comprises a tendon-driven soft robotic arm and the two rigid arms of the humanoid biped robot Digit. This configuration harnesses the strengths of mobile robots, soft robots, and the triple-arm setup. Currently, we are working on model-free humanoid loco-manipulation based on trajectory optimization. In the short term, our objective is to combine model-based trajectory optimization with reinforcement learning (RL) to achieve robust whole-body loco-manipulation. We have also modeled the soft arm in the MuJoCo simulation environment. Our long-term goal is to develop an innovative motion planning framework for sequential collaborative triple-arm loco-manipulation tasks. The framework is designed to orchestrate a sequence of movements, enabling logical reasoning about operation sequences and constraints. It will enhance the coordination of multiple robotic arms and incorporate safeguards against self-collision. To realize safe and effective cooperative manipulation, it is essential to create a comprehensive body-shape planner. This planner will be responsible for the trajectory optimization of a soft continuum arm, in concert with the two rigid arms, on the humanoid biped robot Digit.
  • Ideal Team Size: 5
  • Expected Outcomes: The expectation is to implement one or more reasonable baseline algorithms and improve them on humanoid Digit robot or soft-arm loco-manipulation tasks.
  • Necessary Skills: Python, Deep Learning, Basic knowledge of robot kinematics and dynamics
  • Preferred Skills: Reinforcement Learning, Control
  • Grading info: proposal (2 pages); midterm (4 pages); final report (6 pages); and presentation sessions.
  • References: https://learning-humanoid-locomotion.github.io/
    https://expressive-humanoid.github.io/
    https://github.com/loco-3d/crocoddyl
    https://ieeexplore.ieee.org/abstract/document/10160562
    https://humanoid-ai.github.io/
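As a toy illustration of the trajectory-optimization ingredient mentioned above, the sketch below smooths a 1-D sequence of waypoints between fixed endpoints by gradient descent on a sum-of-squared-differences cost. Everything here is made up for illustration; real loco-manipulation problems add dynamics, contact, and collision constraints and would use a solver such as Crocoddyl (referenced above):

```python
# Toy trajectory optimization: smooth N waypoints between fixed endpoints
# by gradient descent on the cost sum_i (x[i+1] - x[i])^2.
N = 8
x = [0.0, 3.0, -1.0, 4.0, 0.5, 2.5, -0.5, 5.0]  # rough initial guess
start, goal = 0.0, 5.0
x[0], x[-1] = start, goal                        # endpoint constraints

def cost(x):
    return sum((x[i + 1] - x[i]) ** 2 for i in range(len(x) - 1))

step = 0.2
for _ in range(500):
    grad = [0.0] * N
    for i in range(1, N - 1):  # endpoints stay pinned
        grad[i] = 2 * (x[i] - x[i - 1]) + 2 * (x[i] - x[i + 1])
    x = [xi - step * gi for xi, gi in zip(x, grad)]
    x[0], x[-1] = start, goal

# x converges toward evenly spaced waypoints on the straight line 0 -> 5,
# the minimizer of this smoothness cost under the endpoint constraints.
```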

4. RL for Robot Learning [availability: 0/2]
  • Mentor: Feiyang Wu (feiyangwu@gatech.edu)
  • Project Description: Train a humanoid agent (the Digit robot in our lab) with Reinforcement Learning (RL) on the NVIDIA Omniverse Isaac Lab platform. Various inverse RL algorithms will be considered. This will expand on previous projects (see references).
  • Ideal Team Size: 2
  • Expected Outcomes: This will be a long-term project and is expected to lead to a top journal publication.
  • Necessary Skills: Python, NumPy, Machine Learning, PyTorch
  • Preferred Skills: Reinforcement Learning, Robot Learning, Omniverse Isaac Sim
  • Grading Info: Depends on participation and effort.
  • References:
    • For Inverse RL: https://arxiv.org/abs/2305.14608.
    • For IRL on robot: https://arxiv.org/abs/2309.16074
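For intuition on the policy-gradient machinery underlying RL training, here is a minimal REINFORCE sketch on a two-armed bandit in pure Python. This is purely pedagogical; the hypothetical reward values and learning rate are made up, and real training would use PyTorch policies in Isaac Lab:

```python
import math
import random

random.seed(1)

# REINFORCE on a 2-armed bandit: arm 1 pays 1.0, arm 0 pays 0.2.
theta = 0.0  # single logit for picking arm 1
lr = 0.1
for _ in range(2000):
    p1 = 1.0 / (1.0 + math.exp(-theta))  # sigmoid policy pi(a=1)
    a = 1 if random.random() < p1 else 0
    r = 1.0 if a == 1 else 0.2
    # grad of log pi(a) w.r.t. theta for a Bernoulli policy is (a - p1);
    # ascend the expected return: theta += lr * r * grad log pi(a)
    theta += lr * r * (a - p1)

p1 = 1.0 / (1.0 + math.exp(-theta))  # policy now strongly prefers arm 1
```

Inverse RL (the focus of this project) works in the opposite direction: it recovers a reward function from demonstrations, after which an update like the one above can optimize a policy against the learned reward.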

5. Deploy RL agents on Hardware [availability: 1/2]
  • Mentors: Feiyang Wu, Zhaoyuan Gu (feiyangwu@gatech.edu, zgu78@gatech.edu)
  • Project Description: This project will focus on deploying trained agents on real robotic systems, particularly the Digit robot (check out our lab website). Work involves verifying the stability of trained agents in simulators, packaging and consolidating the code pipeline, and conducting tests on hardware.
  • Ideal Team Size: 2
  • Expected Outcomes: An efficient working pipeline for model deployment. Time permitting, training and fine-tuning of existing RL agents.
  • Necessary Skills: Python, PyTorch
  • Preferred Skills: C++, ROS
  • Grading Info: Depends on participation and effort.
  • References: General RL framework for bipedal robots: https://www.fracturedplane.com/projects/Cassie_IROS/2018-IROS-cassie.pdf
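The hardware-deployment side typically boils down to a fixed-rate control loop: read state, query the policy, send the command, and hold the loop rate. A minimal sketch in pure Python; the `policy`, `read_state`, and `send_command` callables are hypothetical stand-ins for the real robot interfaces, and the short duration is just for demonstration:

```python
import time

def run_control_loop(policy, read_state, send_command, hz=50, duration_s=0.2):
    """Fixed-rate deployment loop: read state, query policy, send command.
    Sleeps against an absolute schedule so timing errors do not accumulate."""
    dt = 1.0 / hz
    ticks = 0
    t_next = time.monotonic()
    t_end = t_next + duration_s
    while time.monotonic() < t_end:
        state = read_state()
        action = policy(state)
        send_command(action)
        ticks += 1
        t_next += dt
        time.sleep(max(0.0, t_next - time.monotonic()))
    return ticks

# Stand-ins for the robot interface: a trivial linear "policy",
# a constant state reader, and a logger in place of motor commands.
log = []
ticks = run_control_loop(lambda s: -0.5 * s, lambda: 1.0, log.append)
```

On real hardware, this loop would additionally include watchdog checks and a safe fallback command in case the policy or sensor read stalls.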

6. Scalable algorithm for task-based coordination for a swarm of robots [availability: 3/5]
  • Mentor: Jiming Ren, Haris Miller (jren313@gatech.edu, harismiller@gatech.edu)
  • Project Description: This project focuses on coordinating heterogeneous robot teams to fulfill high-level specifications assigned to them. Each team member can perform different types of tasks, and their jobs are allocated based on their capabilities within a global mission.
  • Ideal Team Size: 5
  • Expected Outcomes: Implement algorithms to deploy a team of 12 heterogeneous robots to complete a series of assigned tasks and visualize their operations within a factory simulation.
  • Necessary Skills: C++ and Python, OOP
  • Preferred Skills: Optimization solvers such as Gurobi and Mosek, ROS
  • Grading Info: Grading will be based on a mix of the results you present at the weekly meetings and the effort and time you put into the project each week.
  • References: –
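To illustrate capability-based allocation in miniature: the greedy sketch below assigns each task to a robot that has the required capability, balancing load as it goes. The robots, capabilities, and tie-breaking rule are all made up for illustration; the actual project would solve this with optimization tools such as Gurobi:

```python
# Greedy capability-aware task allocation: assign each task to the
# capable robot with the lightest current load. Illustrative only.
def allocate(tasks, robots):
    """tasks: {task: required_capability}; robots: {robot: set of capabilities}."""
    load = {r: 0 for r in robots}
    assignment = {}
    for task, cap in tasks.items():
        capable = [r for r, caps in robots.items() if cap in caps]
        if not capable:
            assignment[task] = None   # no robot can perform this task
            continue
        r = min(capable, key=lambda r: load[r])  # least-loaded capable robot
        assignment[task] = r
        load[r] += 1
    return assignment

robots = {"uav1": {"scan"}, "ugv1": {"lift", "scan"}, "ugv2": {"lift"}}
tasks = {"t1": "scan", "t2": "lift", "t3": "scan", "t4": "weld"}
plan = allocate(tasks, robots)
```

A greedy pass like this ignores temporal constraints and global optimality, which is exactly the gap that formal high-level specifications and mixed-integer formulations address at scale.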

7. LiDAR/Visual Segmentation, Classification, and Simultaneous Localization and Mapping (SC-SLAM) for Digit Robot [availability: 6/8]
  • Mentor: Wei Zhu, Kasidit Muenprasitivej (wzhu328@gatech.edu)
  • Project Description:
    • 1. Acquire raw LiDAR/camera sensor data.
    • 2. Process the data to localize the Digit robot in outdoor environments. The data could be point clouds and RGB-D images.
    • 3. Detect obstacles through segmentation and clustering algorithms.
    • 4. Classify obstacles such as pedestrians and bicycles with machine learning tools, such as YOLO.
    • 5. Explore more lightweight algorithms that can be deployed on the onboard PC, which has limited hardware resources.
  • Ideal Team Size: 8
  • Expected Outcomes: The algorithms should be deployed on the onboard PC mounted on the Digit robot. Moreover, all algorithms should run simultaneously at 10 Hz.
  • Necessary Skills: Ubuntu, ROS, C++.
  • Preferred Skills: –
  • Grading Info: –
  • References: https://youtu.be/oVxXS5P2Q4w
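The segmentation-and-clustering step can be pictured in 2-D: points whose mutual distance is below a threshold get grouped into the same obstacle. A minimal Euclidean-clustering sketch in pure Python, with a toy point set (real point clouds would use a k-d tree and PCL/C++ for the required 10 Hz rate):

```python
from collections import deque

def euclidean_cluster(points, eps=1.0):
    """Group 2-D points into clusters by breadth-first expansion:
    any point within `eps` of a cluster member joins that cluster."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if (points[i][0] - points[j][0]) ** 2
                     + (points[i][1] - points[j][1]) ** 2 <= eps ** 2]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append(sorted(cluster))
    return clusters

# Two well-separated groups of toy points -> two obstacle clusters.
pts = [(0, 0), (0.5, 0), (0.9, 0.1), (5, 5), (5.4, 5.2)]
clusters = euclidean_cluster(pts, eps=1.0)
```

Each resulting cluster would then be handed to the classifier (step 4) to decide whether it is a pedestrian, a bicycle, or something else.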

8. Bipedal Deformable Terrain Research [availability: 2/5]
  • Mentor: Yuntian Zhao (yzhao801@gatech.edu)
  • Project Description: This bipedal deformable-terrain research is currently focused on reconfigurable foot design and needs students to collaborate on hardware and mechatronics design, assembly, and testing. When the foot is ready, experiments will be carried out on the Cassie hardware.
  • Ideal Team Size: 5
  • Expected Outcomes: We expect a journal submission.
  • Necessary Skills: Autodesk Fusion 360 (for mechanical design) [OR] embedded programming skills [AND] Ubuntu [AND] Python (for sensor data logging and actuator control)
  • Preferred Skills: –
  • Grading Info: Grading will be based on the effort the VIP student devoted. The effort needed will be clarified during the interview.
  • References: The supplementary materials will be released after the interview.

9. Multi-Robot Cooperative Loco-Manipulation [availability: 4/5]
  • Mentor: Yuntian Zhao, Ziyi Zhou (yzhao801@gatech.edu, zhouziyi@gatech.edu)
  • Project Description: This project will use a hierarchical approach to solve multi-robot cooperative loco-manipulation through model-based methods and formal methods. The highest level generates reference trajectories using formal methods, producing collision-free, kinematically feasible, long-horizon trajectories toward the goal. The middle-level predictive planner uses a simplified dynamics model (for now, a single rigid body model, SRBM) to generate a fixed-horizon force input trajectory that tracks the reference trajectory while ensuring dynamic feasibility. The lowest-level controller is a whole-body reactive controller, which uses hierarchical inverse dynamics to command the joint motors to track the force input trajectory, while keeping the robot dynamically feasible and handling task priorities. The project will have the highest-level planner running in stlpy, the middle-level MPC running in OCS2, and the lowest-level WBC modified from legged_control. However, this project will not target only two quadrupeds, but multiple robots more generally, e.g., a humanoid and a quadruped. When the framework is ready in simulation, it will be tested on real hardware, i.e., a B1Z1 quadruped with a robot arm, and Digit.
  • Ideal Team Size: 5
  • Expected Outcomes: This project is expected to have a conference paper submission.
  • Necessary Skills: Please see the preferred skills. If you are not ready yet, you can help with the hardware experiments. What matters more is how much time you devote to the project.
  • Preferred Skills: stlpy/Python; OCS2/C++; MPC-based quadruped and humanoid control; WBC controllers for quadrupeds and humanoids
  • Grading Info: Grading will be based on the effort the VIP student devoted. The effort needed will be clarified during the interview.
  • References: I prefer to give the list of reading after having the interview.
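The three-level hierarchy described above can be caricatured on a 1-D point mass: a top-level planner emits reference waypoints, a middle level converts tracking error into a saturated force, and a bottom level integrates the commanded force. All gains, limits, and the goal below are invented for illustration; the real stack uses formal methods, an SRBM-based MPC, and a whole-body controller:

```python
# The planning/control hierarchy in miniature on a 1-D point mass.
def plan_reference(start, goal, steps):
    """Top-level stand-in: straight-line waypoints toward the goal."""
    return [start + (goal - start) * k / steps for k in range(1, steps + 1)]

def track_force(x, v, x_ref, kp=4.0, kd=3.0, f_max=5.0):
    """Middle-level stand-in: PD force toward the reference, saturated,
    playing the role of the force-generating predictive planner."""
    f = kp * (x_ref - x) - kd * v
    return max(-f_max, min(f_max, f))

def apply_command(x, v, f, dt=0.05, m=1.0):
    """Bottom-level stand-in: integrate the commanded force (Euler step),
    playing the role of the whole-body controller plus robot dynamics."""
    v += f / m * dt
    x += v * dt
    return x, v

x, v = 0.0, 0.0
for x_ref in plan_reference(0.0, 2.0, 40):
    for _ in range(5):  # several control ticks per reference waypoint
        x, v = apply_command(x, v, track_force(x, v, x_ref))
```

The point of the layering is the same as in the real system: each level runs at its own rate and only needs to trust that the level below can track what it hands down.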

10. HECTOR Development [availability: 3/6]
  • Mentor: Ziwon Yoon (zyoon6@gatech.edu)
  • Project Description: HECTOR is a versatile bipedal robot platform designed for dynamic locomotion on challenging terrain such as granular media. Dynamic maneuvering under such unfavorable terrain conditions requires substantial development effort. Team breakdown: 2 for real-world experiments and related algorithm development (estimators, standing controller, etc.), 2 for simulation development (HW-Sim integration), and 2 for hardware design and improvements (joint calibration gantry, arm attachment).
  • Ideal Team Size: 6
  • Expected Outcomes:
    • 1. An integrated HW-Sim environment for safe testing with advanced simulation environments such as Isaac Sim or MuJoCo
    • 2. The platform equipped with necessary functions such as estimators, a joint calibration gantry, etc.
  • Necessary Skills: C++, Python, basics of robotics (kinematics, dynamics)
  • Preferred Skills: Simulation experience
  • Grading Info: Your average score during the semester will be your final grade (and/or the degree of recommendation, if needed later): 4/3/2/1/0 = A/B/C/D/F. You start from 4 if you show good performance in the weekly updates. Participation in experiments and especially satisfactory weekly updates earn you +1. Unsatisfactory weekly updates, or not replying/attending or being late to announcements/my messages/experiments without notice in advance, earn you -1. If you let me know about your issues in advance, that’s totally fine. You can also be excused from weekly updates with reasonable justification; such excusals (including for midterms/finals) can be used up to 4 times in one semester.
  • References: https://gtvault.sharepoint.com/:p:/s/LIDAR/EXRnPa9XCFZEg8v-5PL0UIMBtkg90gGrhP-htBZihO1XhA?e=Hu6508