Learning Visual Features that Predict Grasp Type and Location

Authors

  • Di Wang
  • Andrew H. Fagg
Abstract

J. J. Gibson suggested that objects in our environment can be represented by an agent in terms of the types of actions that the agent may perform on or with the object. This affordance representation allows the agent to make a connection between the perception of key properties of an object and these actions. In this paper, we explore the automatic construction of visual representations that are associated with components of objects that afford certain types of grasping actions. A training data set of images is labeled with regions corresponding to locations at which certain grasp types could be applied to the object. A classifier is trained to predict whether particular image pixels correspond to these grasp regions. Each pixel that is classified as a positive example of a grasp region votes for its surrounding image region. If there exists a pixel with a large enough number of votes, then the image is considered to afford the grasp and the location of the pixel is identified as the best grasp point. Experimental results show that the approach is capable of identifying the occurrence of both handle-type and ball-type grasp options in images containing novel objects.
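The voting scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the per-pixel classifier is assumed to have already produced a boolean map of positive grasp-region pixels, the voting neighborhood is simplified to a square window, and the function name, `radius`, and `min_votes` parameters are hypothetical.

```python
import numpy as np

def locate_grasp(pixel_positives, radius=5, min_votes=20):
    """Accumulate votes from positively classified pixels and pick a grasp point.

    pixel_positives: 2-D boolean array, True where the per-pixel classifier
    labeled the pixel a positive example of a grasp region. Each positive
    pixel votes for every location within `radius` of it (a square
    neighborhood here, for simplicity). If some location gathers at least
    `min_votes`, the image is considered to afford the grasp and the
    highest-voted location is returned; otherwise None.
    """
    h, w = pixel_positives.shape
    votes = np.zeros((h, w), dtype=int)
    for y, x in zip(*np.nonzero(pixel_positives)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        votes[y0:y1, x0:x1] += 1  # this positive pixel votes for its neighborhood
    if votes.max() < min_votes:
        return None  # not enough support: image does not afford this grasp type
    return np.unravel_index(votes.argmax(), votes.shape)
```

A compact cluster of positive pixels yields a high vote count near its center, while scattered false positives fall below the threshold, which is the point of the voting step.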


Related papers

Using Visual Features to Predict Successful Grasp Parameters

Visual features play an important role in hand pre-shaping during human grasping. This paper focuses on using visual features in an image to predict successful grasp types, which can be applied to robot grasp manipulation. The following questions are discussed: First, how to recognize different shapes in a given image. Second, how to train the system using image-grasp pairs. Third, how to evaluate th...


Learning to grasp using visual information

A scheme for learning to grasp objects using visual information is presented. A system is considered that coordinates a parallel-jaw gripper (hand) and a camera (eye). Given an object, and considering its geometry, the system chooses grasping points, and performs the grasp. The system learns while performing grasping trials. For each grasp we store location parameters that code the locations of...


Learning Visual Features to Predict Hand Orientations

This paper is a preliminary account of current work on a visual system that learns to aid in robotic grasping and manipulation tasks. Localized features of the visual scene are learned that correlate reliably with the orientation of a dextrous robotic hand during haptically guided grasps. On the basis of these features, hand orientations are recommended for future grasping operations. The learni...


How can I, robot, pick up that object with my hand?

This paper describes a practical approach to the robot grasping problem, composed of two parts. First, a vision-based grasp synthesis system implemented on a humanoid robot that can compute a set of feasible grasps and execute any of them. This grasping system takes gripper kinematic constraints into account and uses little computational effort. Second, a learni...


Evaluation of demographic, clinical characteristics and type of hydrocephaly before and after surgical interventions in patients with intra ventricular brain tumor surgery

Introduction: Hydrocephaly is a common complication of intra ventricular brain tumors. The aim of this study was to determine, demographic and clinical features and type of hydrocephaly before and after surgical interventions in patients with intra ventricular brain tumor surgery. Material and methods: In a cross-sectional study, 100 patients with intra ventricular brain tumors who were candid...



Journal:

Volume   Issue 

Pages  -

Publication date: 2009