Toward Learning Context-Dependent Tasks from Demonstration for Tendon-Driven Surgical Robots

Introduction

Tendon-driven robots, a type of continuum robot, have the potential to reduce the invasiveness of surgery by enabling access to difficult-to-reach anatomical targets. In the future, the automation of surgical tasks for these robots may help reduce surgeon strain in the face of a rapidly growing population. However, directly encoding surgical tasks and their associated context for these robots is infeasible. In this work, we take steps toward a system that learns to successfully perform context-dependent surgical tasks directly from a set of expert demonstrations. We present three models trained on the demonstrations, each conditioned on a vector encoding the demonstration's context. We then use these models to plan and execute motions for the tendon-driven robot, similar to the demonstrations, for novel contexts not seen in the training set. We demonstrate the efficacy of our method on three surgery-inspired tasks.
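To give a rough sense of what "conditioning a model on a context vector" can look like in practice, below is a minimal sketch, not the authors' implementation, of a behavior-cloning style network that takes the robot's current configuration together with a task-context vector and regresses the next configuration from demonstrated transitions. The class names, dimensions, data layout, and training loop are all illustrative assumptions; the paper's actual models and planner may differ.

```python
# Minimal sketch (illustrative, not the paper's code) of a context-conditioned
# learning-from-demonstration model, assuming demonstrations are stored as
# (context_vector, configuration_trajectory) pairs and the tendon-driven robot
# is controlled in a low-dimensional tendon/configuration space.
import torch
import torch.nn as nn


class ContextConditionedPolicy(nn.Module):
    """Predicts the next robot configuration from the current one,
    conditioned on a vector encoding the task context."""

    def __init__(self, config_dim: int = 4, context_dim: int = 3, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(config_dim + context_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, config_dim),  # next configuration (or a delta)
        )

    def forward(self, config: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # Concatenate state and context so the same network can produce
        # different motions for different task contexts.
        return self.net(torch.cat([config, context], dim=-1))


def train(policy, demos, epochs: int = 100, lr: float = 1e-3):
    """Regress each demonstrated transition, conditioned on that
    demonstration's context vector (hypothetical training loop)."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for context, trajectory in demos:  # trajectory: (T, config_dim) tensor
            current, target = trajectory[:-1], trajectory[1:]
            ctx = context.expand(current.shape[0], -1)
            loss = loss_fn(policy(current, ctx), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

At execution time, such a model would be rolled out (or used inside a planner) with a context vector describing the new, unseen task instance.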

Publication

  1. "Toward Learning Context-Dependent Tasks from Demonstration for Tendon-Driven Surgical Robots", Yixuan Huang, Michael Bentley, Tucker Hermans, and Alan Kuntz, 2021 International Symposium on Medical Robotics (ISMR) 2021. Best Paper Award Finalist

Here is the corresponding BibTeX entry:

@InProceedings{huang-ismr2021-LfD-tendon,
  author    = {Yixuan Huang and Michael Bentley and Tucker Hermans and Alan Kuntz},
  title     = {{Toward Learning Context-Dependent Tasks from Demonstration for Tendon-Driven Surgical Robots}},
  booktitle = {International Symposium on Medical Robotics (ISMR)},
  award     = {Best Paper Award Finalist},
  url       = {https://arxiv.org/abs/2110.07789},
  year      = {2021}
}

Presentation