Yixuan Huang

PhD Student - Computer Science

Email: yixuan.huang@utah.edu
Office: Room 2160, Merrill Engineering, 50 Central Campus Dr, Salt Lake City, UT 84112 (Map)

[ Curriculum Vitae ]

Hi there, I'm Yixuan Huang!

I'm a fourth-year Ph.D. student in the Kahlert School of Computing at the University of Utah. I am fortunate to be advised by Prof. Tucker Hermans. I am currently working on perception, reasoning, and planning for multi-object manipulation tasks. Before coming to Utah, I worked on sample-efficient reinforcement learning and safe autonomous driving at UC San Diego for two years, advised by Prof. Sicun Gao. I received my bachelor's degree in Computer Science and Engineering from Northeastern University (China) in 2020.

Objects rarely sit in isolation in human environments. As such, we'd like our robots to reason about how multiple objects relate to one another and how those relations may change as the robot interacts with the world. To this end, we propose a novel graph neural network framework for multi-object manipulation that predicts how inter-object relations change given robot actions. Our model operates on partial-view point clouds and can reason about multiple objects dynamically interacting during manipulation. By learning a dynamics model in a learned latent graph embedding space, our model enables multi-step planning to reach target goal relations. We show that our model, trained purely in simulation, transfers well to the real world. Our planner enables the robot to rearrange a variable number of objects with a range of shapes and sizes using both push and pick-and-place skills.
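The pipeline above (encode each object's partial-view point cloud, exchange messages between object nodes, then classify pairwise relations) can be sketched in a few lines of NumPy. This is a purely illustrative toy with random weights, not the architecture from the paper; the encoder, message function, and relation head here are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_objects(point_clouds):
    """Toy per-object encoder: mean + std of each partial-view point cloud.
    `point_clouds` is a list of (n_points, 3) arrays; returns (n_objects, 6)."""
    return np.stack([np.concatenate([pc.mean(axis=0), pc.std(axis=0)])
                     for pc in point_clouds])

def message_passing(node_feats, w_msg):
    """One round of fully connected message passing: each object node
    aggregates the mean of the other nodes' features through a linear map."""
    n = node_feats.shape[0]
    msgs = node_feats @ w_msg
    agg = (msgs.sum(axis=0, keepdims=True) - msgs) / max(n - 1, 1)
    return np.tanh(node_feats + agg)

def relation_logits(node_feats, w_rel):
    """Score every ordered object pair, e.g. a logit for 'i is left of j'."""
    n = node_feats.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    feats = np.stack([np.concatenate([node_feats[i], node_feats[j]])
                      for i, j in pairs])
    return dict(zip(pairs, feats @ w_rel))

# Three toy "objects" as random partial point clouds at different offsets.
clouds = [rng.normal(size=(50, 3)) + off
          for off in ([0, 0, 0], [1, 0, 0], [0, 1, 0])]
x = encode_objects(clouds)                      # (3, 6) object embeddings
x = message_passing(x, rng.normal(scale=0.1, size=(6, 6)))
logits = relation_logits(x, rng.normal(scale=0.1, size=(12,)))
print(len(logits))                              # one score per ordered pair
```

A learned latent dynamics model would additionally map the node embeddings and a candidate action to the next-step embeddings, so a planner can search action sequences against target relations entirely in this latent space.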


- "Planning for Multi-Object Manipulation with Graph Neural Network Relational Classifiers", Yixuan Huang, Adam Conkey, and Tucker Hermans. International Conference on Robotics and Automation (ICRA), 2023.

[ Project page ] [ Code ]

- "Latent Space Planning for Multi-Object Manipulation with Environment-Aware Relational Classifiers", Yixuan Huang, Nichols Crawford Taylor, Adam Conkey, Weiyu Liu, Tucker Hermans. Under Review.

[ Project page ]

Tendon-driven robots, a type of continuum robot, have the potential to reduce the invasiveness of surgery by enabling access to difficult-to-reach anatomical targets. In the future, the automation of surgical tasks for these robots may help reduce surgeon strain in the face of a rapidly growing patient population. However, directly encoding surgical tasks and their associated context for these robots is infeasible. In this work, we take steps toward a system that learns to successfully perform context-dependent surgical tasks directly from a set of expert demonstrations. We present three models trained on the demonstrations, each conditioned on a vector encoding the context of the demonstration. We then use these models to plan and execute motions for the tendon-driven robot, similar to the demonstrations, for novel contexts not seen in the training set. We demonstrate the efficacy of our method on three surgery-inspired tasks.
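The core idea (map a context vector to a demonstration-like motion) can be sketched with a simple context-conditioned regression. This is a minimal toy, not one of the paper's three models; the dimensions, the synthetic demonstrations, and the ridge-regression map are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each demonstration is a sequence of T robot
# configurations, conditioned on a low-dimensional context vector
# (e.g. the location of the anatomical target). We fit a ridge-regression
# map from context to the flattened trajectory.
T, cfg_dim, ctx_dim, n_demos = 10, 4, 3, 40

contexts = rng.normal(size=(n_demos, ctx_dim))
true_map = rng.normal(size=(ctx_dim, T * cfg_dim))
demos = contexts @ true_map + 0.01 * rng.normal(size=(n_demos, T * cfg_dim))

def fit_context_model(C, Y, lam=1e-3):
    """Ridge regression: W = (C^T C + lam * I)^{-1} C^T Y."""
    d = C.shape[1]
    return np.linalg.solve(C.T @ C + lam * np.eye(d), C.T @ Y)

W = fit_context_model(contexts, demos)

# Plan a motion for a novel context not seen during training.
novel_ctx = rng.normal(size=(1, ctx_dim))
planned = (novel_ctx @ W).reshape(T, cfg_dim)
print(planned.shape)  # (10, 4): a T-step trajectory of configurations
```

In practice the planned configurations would still need to be checked against the tendon-driven robot's kinematic and safety constraints before execution; the sketch only shows the context-to-motion conditioning.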


- "Toward Learning Context-Dependent Tasks from Demonstration for Tendon-Driven Surgical Robots", Yixuan Huang, Michael Bentley, Tucker Hermans, and Alan Kuntz. International Symposium on Medical Robotics (ISMR), 2021. Best Paper Award Finalist, Best Student Paper Award Finalist.

[ Project page ]