<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://robot-learning.cs.utah.edu/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="https://robot-learning.cs.utah.edu/feed.php">
        <title>LL4MA lab - project</title>
        <description></description>
        <link>https://robot-learning.cs.utah.edu/</link>
        <image rdf:resource="https://robot-learning.cs.utah.edu/_media/wiki/logo.png" />
        <dc:date>2026-04-14T06:54:51+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="https://robot-learning.cs.utah.edu/project/benchmarking_in_hand_manipulation?rev=1698179930&amp;do=diff"/>
                <rdf:li rdf:resource="https://robot-learning.cs.utah.edu/project/graph_nets?rev=1698179930&amp;do=diff"/>
                <rdf:li rdf:resource="https://robot-learning.cs.utah.edu/project/grasp_active_learning?rev=1698179930&amp;do=diff"/>
                <rdf:li rdf:resource="https://robot-learning.cs.utah.edu/project/grasp_inference?rev=1698179930&amp;do=diff"/>
                <rdf:li rdf:resource="https://robot-learning.cs.utah.edu/project/grasp_type?rev=1698179930&amp;do=diff"/>
                <rdf:li rdf:resource="https://robot-learning.cs.utah.edu/project/grasp_voxel_inference?rev=1698179930&amp;do=diff"/>
                <rdf:li rdf:resource="https://robot-learning.cs.utah.edu/project/in_hand_manipulation?rev=1698179930&amp;do=diff"/>
                <rdf:li rdf:resource="https://robot-learning.cs.utah.edu/project/lfd_tendon_robot?rev=1698179930&amp;do=diff"/>
                <rdf:li rdf:resource="https://robot-learning.cs.utah.edu/project/undergraduate_robotics?rev=1698179930&amp;do=diff"/>
                <rdf:li rdf:resource="https://robot-learning.cs.utah.edu/project/vpn?rev=1698179930&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="https://robot-learning.cs.utah.edu/_media/wiki/logo.png">
        <title>LL4MA lab</title>
        <link>https://robot-learning.cs.utah.edu/</link>
        <url>https://robot-learning.cs.utah.edu/_media/wiki/logo.png</url>
    </image>
    <item rdf:about="https://robot-learning.cs.utah.edu/project/benchmarking_in_hand_manipulation?rev=1698179930&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-10-24T20:38:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>benchmarking_in_hand_manipulation</title>
        <link>https://robot-learning.cs.utah.edu/project/benchmarking_in_hand_manipulation?rev=1698179930&amp;do=diff</link>
        <description>Benchmarking In-Hand Manipulation

The purpose of this benchmark is to evaluate the planning and control aspects
of robotic in-hand manipulation systems. The goal is to assess the system's ability to change the pose of a hand-held object using the fingers, the environment, or a combination of both.
Given an object surface mesh from the YCB dataset, we provide examples of initial and goal states for various in-hand manipulation tasks.
We further propose metrics that measure the error in r…</description>
    </item>
    <item rdf:about="https://robot-learning.cs.utah.edu/project/graph_nets?rev=1698179930&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-10-24T20:38:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>graph_nets</title>
        <link>https://robot-learning.cs.utah.edu/project/graph_nets?rev=1698179930&amp;do=diff</link>
        <description>Planning for Multi-Object Manipulation with Graph Neural Network Relational Classifiers

Objects rarely sit in isolation in human environments. As such, we’d like our robots to reason about how
multiple objects relate to one another and how those relations
may change as the robot interacts with the world. To this
end, we propose a novel graph neural network framework for
multi-object manipulation to predict how inter-object relations
change given robot actions. Our model operates on partial-view…</description>
    </item>
    <item rdf:about="https://robot-learning.cs.utah.edu/project/grasp_active_learning?rev=1698179930&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-10-24T20:38:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>grasp_active_learning</title>
        <link>https://robot-learning.cs.utah.edu/project/grasp_active_learning?rev=1698179930&amp;do=diff</link>
        <description>Multi-Fingered Active Grasp Learning

Abstract

Learning-based approaches to grasp planning are preferred over analytical methods due to their ability to better generalize to new, partially observed objects. However, data collection remains one of the biggest bottlenecks for grasp learning methods, particularly for multi-fingered hands. The relatively high dimensional configuration space of the hands coupled with the diversity of objects common in daily life requires a significant number of samp…</description>
    </item>
    <item rdf:about="https://robot-learning.cs.utah.edu/project/grasp_inference?rev=1698179930&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-10-24T20:38:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>grasp_inference</title>
        <link>https://robot-learning.cs.utah.edu/project/grasp_inference?rev=1698179930&amp;do=diff</link>
        <description>Planning Multi-Fingered Grasps as Probabilistic Inference in a Learned Deep Network

Abstract

We propose a novel approach to multi-fingered grasp planning leveraging learned deep neural network models. We train a convolutional neural network to predict grasp success as a function of both visual information of an object and grasp configuration. We can then formulate grasp planning as inferring the grasp configuration which maximizes the probability of grasp success. We efficiently perform this i…</description>
    </item>
    <item rdf:about="https://robot-learning.cs.utah.edu/project/grasp_type?rev=1698179930&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-10-24T20:38:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>grasp_type</title>
        <link>https://robot-learning.cs.utah.edu/project/grasp_type?rev=1698179930&amp;do=diff</link>
        <description>Modeling Grasp Type Improves Learning-Based Grasp Planning

Abstract

Different manipulation tasks require different types of grasps. For example, holding a heavy tool like a hammer requires a multi-fingered power grasp offering stability, while holding a pen to write requires a multi-fingered precision grasp to impart dexterity on the object. In this paper, we propose a probabilistic grasp planner that explicitly models grasp type for planning high-quality precision and power grasps in real-tim…</description>
    </item>
    <item rdf:about="https://robot-learning.cs.utah.edu/project/grasp_voxel_inference?rev=1698179930&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-10-24T20:38:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>grasp_voxel_inference</title>
        <link>https://robot-learning.cs.utah.edu/project/grasp_voxel_inference?rev=1698179930&amp;do=diff</link>
        <description>Multi-Fingered Grasp Planning via Inference in Deep Neural Networks

Abstract

We propose a novel approach to multi-fingered grasp
planning leveraging learned deep neural network models. We
train a voxel-based 3D convolutional neural network to predict
grasp success probability as a function of both visual information
of an object and grasp configuration. We can then formulate grasp
planning as inferring the grasp configuration which maximizes
the probability of grasp success. In addition, we le…</description>
    </item>
    <item rdf:about="https://robot-learning.cs.utah.edu/project/in_hand_manipulation?rev=1698179930&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-10-24T20:38:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>in_hand_manipulation</title>
        <link>https://robot-learning.cs.utah.edu/project/in_hand_manipulation?rev=1698179930&amp;do=diff</link>
        <description>In-Hand Manipulation

Summary

Solving the general in-hand manipulation problem using real world robotic hands requires a variety of manipulation skills. We focus on a task that can be solved using in-hand manipulation: in-hand object reposing. We explore methods to repose an object with reference to the palm without dropping the object.</description>
    </item>
    <item rdf:about="https://robot-learning.cs.utah.edu/project/lfd_tendon_robot?rev=1698179930&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-10-24T20:38:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>lfd_tendon_robot</title>
        <link>https://robot-learning.cs.utah.edu/project/lfd_tendon_robot?rev=1698179930&amp;do=diff</link>
        <description>Toward Learning Context-Dependent Tasks from Demonstration for Tendon-Driven Surgical Robots

Introduction

Tendon-driven robots, a type of continuum robot, have the potential to reduce the invasiveness of surgery by enabling access to difficult-to-reach anatomical targets.
In the future, the automation of surgical tasks for these robots may help reduce surgeon strain in the face of a rapidly growing population.
However, directly encoding surgical tasks and their associated context for these rob…</description>
    </item>
    <item rdf:about="https://robot-learning.cs.utah.edu/project/undergraduate_robotics?rev=1698179930&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-10-24T20:38:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>undergraduate_robotics</title>
        <link>https://robot-learning.cs.utah.edu/project/undergraduate_robotics?rev=1698179930&amp;do=diff</link>
        <description>Robotics Courses for CS Undergrads

This page lists suggested courses for undergraduates in computing who are interested in robotics. Some are foundational courses that provide useful background for robotics courses and research, while others are electives focused on tools often used in robotics.</description>
    </item>
    <item rdf:about="https://robot-learning.cs.utah.edu/project/vpn?rev=1698179930&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-10-24T20:38:50+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>vpn</title>
        <link>https://robot-learning.cs.utah.edu/project/vpn?rev=1698179930&amp;do=diff</link>
        <description>Test 123</description>
    </item>
</rdf:RDF>
