Modeling Grasp Type Improves Learning-Based Grasp Planning

Different manipulation tasks require different types of grasps. For example, holding a heavy tool like a hammer requires a multi-fingered power grasp offering stability, while holding a pen to write requires a multi-fingered precision grasp that imparts dexterity to the object. In this paper, we propose a probabilistic grasp planner that explicitly models grasp type in order to plan high-quality precision and power grasps in real time. We take a learning approach so that we can plan grasps of different types for previously unseen objects when only partial visual information is available. Our work demonstrates the first supervised learning approach to grasp planning that can explicitly plan both power and precision grasps for a given object. Additionally, we compare our learned grasp model with a model that does not encode type and show that modeling grasp type improves the success rate of generated grasps. Furthermore, we show the benefit of learning a prior over grasp configurations to improve grasp inference with a learned classifier.
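To make the inference step concrete, the following is a minimal, hypothetical sketch of how a planner of this kind can combine a learned success classifier with a learned prior over grasp configurations. All names and model choices here are illustrative assumptions, not the paper's actual model: the classifier is a fixed linear-logistic stand-in conditioned on grasp type, the prior is an isotropic Gaussian, and MAP inference is done by simple gradient ascent on the log posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (NOT the paper's learned models): one linear-logistic
# success classifier per grasp type, and a Gaussian prior over the grasp
# configuration theta (e.g. palm pose + finger preshape; 5-D for illustration).
D = 5
W = {"power": rng.normal(size=D), "precision": rng.normal(size=D)}
PRIOR_MEAN = np.zeros(D)
PRIOR_STD = 1.0

def log_success(theta, grasp_type):
    """log p(success | theta, grasp_type) under a logistic model."""
    z = W[grasp_type] @ theta
    return -np.log1p(np.exp(-z))  # numerically stable log sigmoid(z)

def log_prior(theta):
    """Log-density (up to a constant) of an isotropic Gaussian prior."""
    return -0.5 * np.sum((theta - PRIOR_MEAN) ** 2) / PRIOR_STD**2

def plan_grasp(grasp_type, theta_init, steps=200, lr=0.1):
    """MAP inference: ascend log p(success|theta,type) + log p(theta)."""
    theta = theta_init.astype(float).copy()
    for _ in range(steps):
        z = W[grasp_type] @ theta
        sig = 1.0 / (1.0 + np.exp(-z))
        grad = (1.0 - sig) * W[grasp_type]        # d/dtheta log sigmoid(z)
        grad -= (theta - PRIOR_MEAN) / PRIOR_STD**2  # d/dtheta log prior
        theta += lr * grad
    return theta

theta0 = rng.normal(size=D)  # in practice, from a heuristic initializer
theta_star = plan_grasp("precision", theta0)
```

Because both the log-sigmoid likelihood and the Gaussian log prior are concave in this toy model, gradient ascent reliably improves the log posterior from the heuristic initialization; the real planner would condition the classifier on the observed partial point cloud as well.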

Here is the corresponding bibtex entry:

@article{lu2019grasptype,
  title={{Modeling Grasp Type Improves Learning-Based Grasp Planning}},
  author={Lu, Qingkai and Hermans, Tucker},
  journal={IEEE Robotics and Automation Letters},
}

We conduct experiments using the four-fingered, 16-DOF Allegro hand mounted on a Kuka LBR4 arm. We use a Kinect2 camera to generate the point cloud of the object on the table. We performed real-robot grasp experiments on 8 objects spanning different shapes and textures. We compare our grasp planner with a type-free grasp planner to investigate the effect of explicitly modeling grasp type in learning. We additionally compare our grasp planner to a geometry-based heuristic grasp planner, which we also use to initialize our grasp inference. In total, the robot performed 240 different grasps across all experiments.

Examples of successful precision and power grasps: