Seminar by Diego Resende Faria (preparation for Ph.D. defense)
(Supervisors: Prof. Jorge Dias, Prof. Jorge Lobo)
In this thesis we study how humans manipulate everyday objects, and we construct a probabilistic representation model of tasks and objects that is useful for autonomous grasping and manipulation by robotic hands. An object-centric probabilistic volumetric model is proposed to represent the object shape acquired through in-hand exploration. The volumetric map also serves to fuse multimodal data, mapping contact regions and tactile forces during stable grasps. This model is refined by segmenting the volume into components approximated by superquadric modeling, and by overlaying the contact points, taking the task context into account. A novel approach to object identification through human in-hand exploration is proposed: different contact points are associated with an object shape, modeled by mixture models, allowing the object to be identified from the set of hand configurations used during the in-hand exploration.
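As an illustration of the superquadric component modeling mentioned above, the following minimal sketch (not the thesis implementation; the parameter values are hypothetical) evaluates the standard superquadric inside-outside function, which can be used to test whether a contact point lies inside, on, or outside a fitted component:

```python
def superquadric_F(p, a, e):
    """Superquadric inside-outside function.
    Returns < 1 inside, ~ 1 on the surface, > 1 outside.
    p: point (x, y, z); a: semi-axes (a1, a2, a3); e: shape exponents (e1, e2)."""
    (x, y, z), (a1, a2, a3), (e1, e2) = p, a, e
    xy = (abs(x / a1) ** (2 / e2) + abs(y / a2) ** (2 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2 / e1)

# A unit sphere is the special case a = (1, 1, 1), e = (1, 1).
sphere = ((1, 1, 1), (1, 1))
print(superquadric_F((1, 0, 0), *sphere))    # on the surface -> 1.0
print(superquadric_F((0.5, 0, 0), *sphere))  # inside -> 0.25
```

Varying the exponents e1, e2 morphs the same closed form between ellipsoidal, cylindrical, and box-like components, which is what makes superquadrics a compact choice for approximating segmented object parts.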
The results presented in this thesis show that in-hand exploration of objects is useful for modeling and representing object shape, allowing identification from the hand configurations used during exploration. The features extracted from human grasp demonstrations are sufficient to distinguish key patterns that characterize each stage of a manipulation task, ranging from simple object displacement, where the same grasp is held throughout (homogeneous manipulation), to more complex interactions such as object reorientation, fine positioning, and sequential in-hand rotation (dexterous manipulation). We have validated our grasp-synthesis approach on a real robotic platform (a dexterous robotic hand). Results show that segmenting the object into primitives allows the most suitable grasping regions to be identified from previous learning. The proposed approach provides suitable grasps, outperforming more time-consuming analytical and geometrical approaches. Learning from human grasp demonstrations, together with features extracted from objects, is a useful way to endow a dexterous robotic hand with the skills needed to autonomously grasp and manipulate novel objects.
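The identification of an object from the hand configurations used during exploration can be sketched as follows. This is a deliberately simplified, hypothetical stand-in (a single diagonal Gaussian per object rather than the full mixture models, and synthetic hand-configuration features), showing the underlying idea: classify an exploration sequence by the object model under which it is most likely:

```python
import numpy as np

# Hypothetical 3-D hand-configuration features (e.g. grip aperture,
# thumb abduction, wrist angle) recorded during in-hand exploration.
rng = np.random.default_rng(0)
train = {
    "mug":    rng.normal([0.8, 0.2, 0.5], 0.05, size=(50, 3)),
    "bottle": rng.normal([0.3, 0.7, 0.4], 0.05, size=(50, 3)),
}

def fit_gaussian(X):
    """One diagonal Gaussian per object (stand-in for a mixture model)."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_lik(x, mu, var):
    """Diagonal-Gaussian log-likelihood of one hand configuration."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

models = {name: fit_gaussian(X) for name, X in train.items()}

# Classify a new exploration sequence by its total log-likelihood.
query = rng.normal([0.8, 0.2, 0.5], 0.05, size=(10, 3))
scores = {name: sum(log_lik(x, mu, var) for x in query)
          for name, (mu, var) in models.items()}
best = max(scores, key=scores.get)
print(best)  # -> mug
```

A full implementation would replace the single Gaussian with a mixture fitted per object (e.g. via EM) and use real hand-pose features, but the decision rule, picking the object whose model best explains the observed hand configurations, is the same.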