Subtopic Deep Dive
Robotic Grasping
Research Guide
What is Robotic Grasping?
Robotic Grasping encompasses algorithms, perception systems, and control strategies enabling robotic hands to reliably grasp and manipulate objects of varying shapes, sizes, and materials in unstructured environments.
This subtopic integrates vision-based detection, tactile sensing, and learned policies for robust grasp planning. Key works include vision-driven grasping of novel objects (Saxena et al., 2008, 938 citations) and deep learning from synthetic point clouds (Mahler et al., 2017, 1135 citations). More than ten high-citation papers spanning 1985-2017 cover approaches from classical control to reinforcement learning.
Why It Matters
Robust robotic grasping enables automation in manufacturing pick-and-place tasks (Maciejewski and Klein, 1985) and logistics handling of novel objects (Saxena et al., 2008). Tactile sensors like GelSight improve force estimation for delicate manipulation (Yuan et al., 2017). Deep reinforcement learning policies support asynchronous training for real-world deployment (Gu et al., 2017), reducing human intervention in household robotics.
Key Research Challenges
Grasping Novel Objects
Detecting stable grasps on unseen objects without 3D models remains difficult due to shape and material variability. Saxena et al. (2008) addressed vision-based grasping but struggled with clutter. Mahler et al. (2017) used synthetic data to improve generalization, yet real-world transfer gaps persist.
Tactile Sensing Integration
Fusing high-resolution tactile feedback with vision for precise force control remains a sensor-fusion challenge. Yuan et al. (2017) introduced GelSight for contact geometry estimation, but real-time processing limits its use in dynamic tasks. Impedance control aids physical interaction (Hogan, 1985), though adapting it to unknown dynamics remains unresolved.
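The core idea behind impedance control is to make the end-effector behave like a virtual spring-damper, mapping motion error to commanded force rather than tracking position rigidly. The 1-D simulation below is a minimal sketch of that idea; the mass, gains, and target values are illustrative, not Hogan's original formulation.

```python
def impedance_force(x, v, x_des, v_des, k=50.0, b=10.0):
    """Virtual spring-damper: commanded force from position/velocity error."""
    return k * (x_des - x) + b * (v_des - v)

# Toy 1-D point mass driven toward a target through the virtual impedance.
m, dt = 1.0, 0.01
x, v = 0.0, 0.0
for _ in range(2000):
    f = impedance_force(x, v, x_des=1.0, v_des=0.0)
    v += (f / m) * dt
    x += v * dt

print(round(x, 3))  # settles near the 1.0 target
```

Because the controller specifies a force-motion relationship instead of a trajectory, contact with an unmodeled surface simply shifts the equilibrium rather than producing large corrective forces, which is what makes the scheme attractive for manipulation.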
Learning in Unstructured Environments
Training policies for cluttered, dynamic scenes requires vast data and safe exploration. Gu et al. (2017) proposed off-policy RL for manipulation, but sample inefficiency hinders deployment. Imitation learning from demonstrations (Hussein et al., 2017) helps, yet scaling to diverse objects is challenging.
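Gu et al.'s method is far richer (deep actor-critic networks with asynchronous updates across robots), but the essence of off-policy learning, updating a value estimate from transitions collected under a different behavior policy, can be shown with tabular Q-learning on a toy chain task. Everything below (the 5-state chain, the reward placement) is an illustrative sketch, not the paper's algorithm.

```python
import random

random.seed(0)
n_states, actions = 5, [0, 1]   # toy chain; action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma = 0.1, 0.9

def step(s, a):
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    r = 1.0 if s2 == n_states - 1 else 0.0  # reward only at the right end
    return s2, r

# Off-policy: behavior is uniformly random, but the max() target
# evaluates the greedy policy -- the defining Q-learning update.
for _ in range(5000):
    s = random.randrange(n_states - 1)
    a = random.choice(actions)
    s2, r = step(s, a)
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])

greedy = [max(actions, key=lambda a: Q[s][a]) for s in range(n_states - 1)]
print(greedy)  # learned greedy policy moves right, toward the reward
```

The same decoupling of behavior from the learned target is what lets asynchronous workers feed one shared learner, which is how off-policy methods reduce per-robot data requirements in practice.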
Essential Papers
The coordination of arm movements: an experimentally confirmed mathematical model
Tamar Flash, Neville Hogan · 1985 · Journal of Neuroscience · 4.3K citations
This paper presents studies of the coordination of voluntary human arm movements. A mathematical model is formulated which is shown to predict both the qualitative features and the quantitative det...
Adaptive representation of dynamics during learning of a motor task
Reza Shadmehr, FA Mussa-Ivaldi · 1994 · Journal of Neuroscience · 2.6K citations
We investigated how the CNS learns to control movements in different dynamical conditions, and how this learned behavior is represented. In particular, we considered the task of making reaching mov...
Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates
Shixiang Gu, Ethan Holly, Timothy Lillicrap et al. · 2017 · 1.4K citations
Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcemen...
Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
Jeffrey Mahler, Jacky Liang, Sherdil Niyaz et al. · 2017 · 1.1K citations
To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds, grasps, and analytic grasp metrics genera...
GelSight: High-Resolution Robot Tactile Sensors for Estimating Geometry and Force
Wenzhen Yuan, Siyuan Dong, Edward H. Adelson · 2017 · Sensors · 1.0K citations
Tactile sensing is an important perception mode for robots, but the existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how to use the informa...
Obstacle Avoidance for Kinematically Redundant Manipulators in Dynamically Varying Environments
Anthony A. Maciejewski, Charles A. Klein · 1985 · The International Journal of Robotics Research · 990 citations
The vast majority of work to date concerned with obstacle avoidance for manipulators has dealt with task descriptions in the form of pick-and-place movements. The added flexibility in motion contro...
Minimum-time control of robotic manipulators with geometric path constraints
Kang G. Shin, Neil David McKay · 1985 · IEEE Transactions on Automatic Control · 990 citations
Conventionally, robot control algorithms are divided into two stages, namely, path or trajectory planning and path tracking (or path control). This division has been adopted mainly as a means of al...
Reading Guide
Foundational Papers
Start with Flash and Hogan (1985, 4304 cites) for arm coordination models foundational to grasp planning; Saxena et al. (2008, 938 cites) for vision grasping of novel objects; Hogan (1985) for impedance control in manipulation.
Recent Advances
Study Mahler et al. (2017, 1135 cites) for deep learning grasp planning; Gu et al. (2017, 1425 cites) for RL in manipulation; Yuan et al. (2017, 1007 cites) for tactile advances.
Core Methods
Core techniques: analytic grasp metrics (Mahler et al., 2017), off-policy RL (Gu et al., 2017), vision feature learning (Saxena et al., 2008), and soft tactile sensing (Yuan et al., 2017).
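Analytic grasp metrics in the Dex-Net spirit test whether a grasp resists slip under a Coulomb friction model. The sketch below is a simplified two-finger antipodal check (the function name, contact values, and friction coefficient are illustrative, not Dex-Net's actual robustness metric): each inward contact normal must lie inside the friction cone around the grasp axis, whose half-angle is arctan(mu).

```python
import numpy as np

def antipodal_check(p1, n1, p2, n2, mu=0.5):
    """Two-finger antipodal test: both inward surface normals must lie
    inside the friction cone around the grasp axis (half-angle arctan(mu))."""
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)
    half_angle = np.arctan(mu)
    # Inward-facing normals should oppose each other along the axis.
    a1 = np.arccos(np.clip(np.dot(n1, axis), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(n2, -axis), -1.0, 1.0))
    return bool(a1 <= half_angle and a2 <= half_angle)

# Parallel-jaw grasp across a box: inward normals aligned with the grasp axis.
p1, n1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p2, n2 = np.array([0.1, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
print(antipodal_check(p1, n1, p2, n2))   # True: ideal antipodal contacts

# Tilting one normal by 45 degrees exceeds the ~26.6 degree cone for mu=0.5.
n1_tilt = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0])
print(antipodal_check(p1, n1_tilt, p2, n2))  # False: contact would slip
```

Dex-Net evaluates metrics like this over millions of sampled grasps on synthetic meshes, then trains a network to predict the scores directly from depth images.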
How PapersFlow Helps You Research Robotic Grasping
Discover & Search
Research Agent uses searchPapers to find 'Dex-Net 2.0' by Mahler et al. (2017), then citationGraph reveals 1000+ downstream works on grasp metrics, and findSimilarPapers uncovers tactile extensions like GelSight (Yuan et al., 2017). exaSearch queries 'vision-based robotic grasping novel objects' to surface Saxena et al. (2008) amid 250M+ papers.
Analyze & Verify
Analysis Agent applies readPaperContent to extract grasp success rates from Mahler et al. (2017), verifies claims via verifyResponse (CoVe) against synthetic-data benchmarks, and runs PythonAnalysis to plot citation trends or reimplement analytic grasp metrics with NumPy. GRADE scoring rates the strength of evidence for the RL policies in Gu et al. (2017).
Synthesize & Write
Synthesis Agent detects gaps in novel object grasping post-Saxena et al. (2008), flags contradictions between classical (Flash and Hogan, 1985) and learning methods, and uses exportMermaid for grasp planning flowcharts. Writing Agent employs latexEditText for methods sections, latexSyncCitations for 10+ papers, and latexCompile to generate submission-ready reviews.
Use Cases
"Reproduce grasp success rates from Dex-Net 2.0 on synthetic data"
Research Agent → searchPapers('Dex-Net 2.0') → Analysis Agent → readPaperContent + runPythonAnalysis (NumPy/matplotlib to plot metrics from Mahler et al. 2017 tables) → researcher gets replicated success rate graphs and statistical summary.
"Write a survey section on vision vs learning-based grasping"
Research Agent → citationGraph(Saxena 2008) → Synthesis Agent → gap detection → Writing Agent → latexEditText + latexSyncCitations(10 papers) + latexCompile → researcher gets LaTeX-formatted section with figures and bibliography.
"Find GitHub repos for GelSight tactile sensor implementations"
Research Agent → searchPapers('GelSight Yuan 2017') → Code Discovery → paperExtractUrls → paperFindGithubRepo → githubRepoInspect → researcher gets verified code links, README summaries, and example tactile data processors.
Automated Workflows
Deep Research workflow scans 50+ papers from Flash and Hogan (1985) to Gu et al. (2017), generating structured reports on grasp evolution with GRADE-scored timelines. DeepScan's 7-step chain verifies RL claims in Gu et al. via CoVe checkpoints and Python reanalysis of dynamics adaptation (Shadmehr and Mussa-Ivaldi, 1994). Theorizer builds hypotheses linking impedance control (Hogan, 1985) to modern tactile grasping.
Frequently Asked Questions
What defines Robotic Grasping?
Robotic Grasping covers algorithms and control for robotic hands to grasp varied objects using perception and learning (Saxena et al., 2008; Mahler et al., 2017).
What are core methods in Robotic Grasping?
Methods include vision-based detection (Saxena et al., 2008), deep learning with synthetic clouds (Mahler et al., 2017), tactile sensing (Yuan et al., 2017), and RL policies (Gu et al., 2017).
What are key papers on Robotic Grasping?
Highest cited: Flash and Hogan (1985, 4304 cites) on arm coordination; Mahler et al. (2017, 1135 cites) on Dex-Net; Saxena et al. (2008, 938 cites) on novel object grasping.
What open problems exist in Robotic Grasping?
Challenges include real-world transfer from synthetic data (Mahler et al., 2017), dynamic clutter handling, and scalable learning for unknown materials beyond GelSight (Yuan et al., 2017).
Research Robot Manipulation and Learning with AI
PapersFlow provides specialized AI tools for Engineering researchers. Here are the most relevant for this topic:
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Paper Summarizer
Get structured summaries of any paper in seconds
Code & Data Discovery
Find datasets, code repositories, and computational tools
AI Academic Writing
Write research papers with AI assistance and LaTeX support
See how researchers in Engineering use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching Robotic Grasping with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Engineering researchers
Part of the Robot Manipulation and Learning Research Guide