Research

Dataset Acquisition

To test, compare, and validate our perception framework, we acquired a large dataset of scenes containing textureless and deformable objects in clutter (some examples are shown in Fig. 1).


Fig. 1: Example scenes of objects in clutter, acquired with the system depicted in Fig. 2.


Fig. 2: The custom support built to mount the FlexSight sensor, a Kinect 2, and a RealSense SR300 on the end-effector.

For each scene, images were captured from about 500 different viewpoints. The acquisition was performed with the FlexSight sensor, a Microsoft Kinect 2, and an Intel RealSense SR300 mounted on the end-effector of an industrial manipulator (Fig. 2). To maximize the maneuverability of the robotic arm, we designed and built a compact and robust sensor support.
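As an illustration of this pipeline, a minimal per-scene acquisition loop might look like the sketch below. Only the SR300 stream is shown, using the pyrealsense2 API; the manipulator interface (move_to) and the viewpoint list are hypothetical placeholders, and the Kinect 2 and FlexSight streams would be handled analogously.

    # Per-scene acquisition loop (illustrative sketch). The RealSense calls use
    # the real pyrealsense2 API; the manipulator interface and the viewpoint
    # list are hypothetical stand-ins for the robot driver and planned poses.
    import numpy as np
    import pyrealsense2 as rs

    viewpoints = [np.eye(4) for _ in range(3)]  # stand-in for ~500 end-effector poses

    def move_to(pose):
        """Hypothetical placeholder for the manipulator's motion command."""
        pass

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(config)
    try:
        for i, pose in enumerate(viewpoints):
            move_to(pose)                        # move the arm to the next view
            frames = pipeline.wait_for_frames()  # grab a depth/color frame pair
            depth = np.asanyarray(frames.get_depth_frame().get_data())
            color = np.asanyarray(frames.get_color_frame().get_data())
            np.savez(f"view_{i:04d}.npz", depth=depth, color=color, pose=pose)
            # Kinect 2 and FlexSight frames would be captured analogously here.
    finally:
        pipeline.stop()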


Project Concept

The goal of the FlexSight project (Flexible and Accurate Recognition and Localization System of Deformable Objects for Pick&Place Robots) is to design a perception system based on an integrated smart camera (the FlexSight sensor, FSS) that is able to recognize and localize several types of deformable objects commonly found in industrial and logistic applications. We refer to a "deformable object" as either an object that can change its shape under stress, or one that can be obtained by applying a resize operator along one or more directions.
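To make the second case concrete, the sketch below applies such a per-axis resize operator, diag(sx, sy, sz), to a small model point set with NumPy; the point set and scale factors are illustrative, not taken from the project.

    import numpy as np

    def rescale(points: np.ndarray, s: np.ndarray) -> np.ndarray:
        """Apply an axis-aligned resize operator diag(sx, sy, sz) to an Nx3 point set."""
        return points * s  # broadcasting multiplies each axis by its scale factor

    model = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [1.0, 2.0, 0.5]])
    stretched = rescale(model, np.array([1.0, 1.5, 1.0]))  # stretch 1.5x along y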


The FSS will integrate all the required sensing hardware (depth sensor, stereo RGB camera, ...) and the processing units (CPU+GPU) needed to run the implemented algorithms, and it will be one of the first smart cameras to explicitly deal with deformable objects: our system will search over a parameter set that includes not only the class and pose of the object, but also its deformation/rescaling parameters.
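To illustrate what this extended search space looks like, here is a minimal sketch of a search hypothesis combining class, rigid pose, and per-axis deformation parameters, together with a toy matching score. The names and the brute-force metric are our own illustration under stated assumptions, not the project's actual API or algorithm.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Hypothesis:
        """One point in the extended search space (names are illustrative)."""
        class_id: int        # which object model
        pose: np.ndarray     # 4x4 rigid transform (position + orientation)
        scale: np.ndarray    # per-axis deformation/rescaling factors (sx, sy, sz)

    def fitness(h: Hypothesis, observed: np.ndarray, models: list) -> float:
        """Toy score: mean distance from observed points to the rescaled,
        transformed model (a real system would use a robust metric)."""
        model = models[h.class_id] * h.scale                    # apply deformation
        transformed = model @ h.pose[:3, :3].T + h.pose[:3, 3]  # apply rigid pose
        # nearest-neighbour residual, brute force for clarity
        d = np.linalg.norm(observed[:, None, :] - transformed[None, :, :], axis=2)
        return float(d.min(axis=1).mean())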