ROS affordance_learning stack: perception v1.1

I’ve improved the perception module and reworked most of the classes considerably to obtain a more manageable and distributed structure. Here are the major changes:

  • Replaced the BoundingBox and Object classes with the arm_navigation_msgs::CollisionObject and Entity type structures, and modified most of the modules accordingly.
  • Added volumetric occupancy (volume intersection) based object similarity check routines; a sketch of the idea follows this list.
  • Added nested/merged box support, which lets us track objects more reliably.
  • Id assignment is now done irrespective of the pose of the objects.
  • Overall system performance increased from 3 fps to ~15 fps, thanks to the Kinect sensor and other improvements in the filtering components.
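
To make the volume-intersection idea concrete, below is a minimal sketch of such a similarity check for axis-aligned bounding boxes. The struct, the helper names, and the Jaccard-style overlap score are illustrative assumptions, not the actual al::perception routines.

```cpp
// Illustrative sketch of a volume-intersection similarity check between two
// axis-aligned bounding boxes. Names and the overlap score are hypothetical.
#include <algorithm>

struct Aabb
{
  double min[3];
  double max[3];
};

double boxVolume(const Aabb& b)
{
  return (b.max[0] - b.min[0]) * (b.max[1] - b.min[1]) * (b.max[2] - b.min[2]);
}

// Volume of the intersection of two boxes; zero if they do not overlap.
double intersectionVolume(const Aabb& a, const Aabb& b)
{
  double v = 1.0;
  for (int i = 0; i < 3; ++i)
  {
    double lo = std::max(a.min[i], b.min[i]);
    double hi = std::min(a.max[i], b.max[i]);
    if (hi <= lo)
      return 0.0;
    v *= (hi - lo);
  }
  return v;
}

// Jaccard-style overlap ratio in [0, 1]; two detections could be treated as
// the same object when the ratio exceeds a threshold (e.g. 0.5).
double overlapRatio(const Aabb& a, const Aabb& b)
{
  const double inter = intersectionVolume(a, b);
  return inter / (boxVolume(a) + boxVolume(b) - inter);
}
```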

Here is a video from one of the tests:


ROS affordance_learning stack: Human object interactions

Lately, I have created a human URDF model, which is used to:

  • filter out the 3D points that fall on the human partner (for the real iCub and the simulated PR2); a sketch of the idea follows this list,
  • obtain the human partner’s kinematic body state during an interaction episode (for the real iCub and the simulated PR2),
  • make changes in a simulated robot’s (e.g. PR2’s) virtual environment by interacting with the objects and also with the robot.
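
As a rough illustration of the point-filtering use case, the sketch below drops every point that lies within a fixed radius of a tracked human link frame. The link frame names, the radius, and the sphere approximation are assumptions; the actual stack presumably checks against the URDF link geometries (a self-filter style test) rather than plain spheres.

```cpp
// Hypothetical sketch: remove cloud points that lie close to any human link
// frame published on tf. Frame names and radius are placeholders.
#include <string>
#include <vector>

#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

typedef pcl::PointCloud<pcl::PointXYZ> Cloud;

void filterHumanPoints(const Cloud& in, Cloud& out,
                       const tf::TransformListener& tf_listener,
                       const std::vector<std::string>& human_links,
                       double radius)
{
  // Look up the current position of every human link in the cloud's frame.
  std::vector<tf::Vector3> link_positions;
  for (size_t i = 0; i < human_links.size(); ++i)
  {
    tf::StampedTransform t;
    try
    {
      tf_listener.lookupTransform(in.header.frame_id, human_links[i],
                                  ros::Time(0), t);
      link_positions.push_back(t.getOrigin());
    }
    catch (const tf::TransformException& e)
    {
      ROS_WARN("%s", e.what());
    }
  }

  // Keep only the points that are farther than `radius` from every link.
  out.header = in.header;
  for (size_t i = 0; i < in.points.size(); ++i)
  {
    tf::Vector3 p(in.points[i].x, in.points[i].y, in.points[i].z);
    bool on_human = false;
    for (size_t j = 0; j < link_positions.size() && !on_human; ++j)
      on_human = (p - link_positions[j]).length() < radius;
    if (!on_human)
      out.points.push_back(in.points[i]);
  }
}
```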

The video below shows preliminary tests of this setup. In the video, I change the position and/or orientation of the objects in the PR2’s environment while, at the same time, extracting the relevant features and visualizing them in rviz and MATLAB.

All the source code is open source and available on the project’s Google Code page. Please note that the source code changes frequently, so using the current version of the stack is not recommended. I hope to move the close-to-stable components under the trunk directory of the repository in a week or so; after that, these packages can be used more comfortably.

ROS affordance_learning stack: perception v1.0

I’ve finally finished the very first version of the perception component of the affordance_learning stack.

The perception module mainly consists of perceptors, and each perceptor includes specialized feature_extractor(s), depending on the problem or on the salient features that we’ve designed. Finally, there are the object(s) that each feature_extractor processes. Objects can be thought of as the salient parts of the environment that the robot somehow extracts from the acquired raw sensory data.
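
As a rough skeleton of this structure (class and method names are illustrative; the actual al::perception interfaces are in the UML diagram below):

```cpp
// Illustrative skeleton: a Perceptor owns specialized FeatureExtractors, and
// each extractor processes the perceived objects. Not the real interfaces.
#include <vector>

struct Object
{
  // A salient part of the environment extracted from raw sensory data,
  // e.g. a point-cloud cluster together with its bounding box.
};

struct Feature
{
  std::vector<double> values;  // e.g. surface normals, pose, size
};

class FeatureExtractor
{
public:
  virtual ~FeatureExtractor() {}
  virtual Feature extract(const Object& object) = 0;
};

class Perceptor
{
public:
  void addExtractor(FeatureExtractor* extractor)
  {
    extractors_.push_back(extractor);
  }

  // Run every registered extractor on every perceived object.
  std::vector<Feature> perceive(const std::vector<Object>& objects)
  {
    std::vector<Feature> features;
    for (size_t i = 0; i < objects.size(); ++i)
      for (size_t j = 0; j < extractors_.size(); ++j)
        features.push_back(extractors_[j]->extract(objects[i]));
    return features;
  }

private:
  std::vector<FeatureExtractor*> extractors_;
};
```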

Below is a UML diagram which shows, to some extent, a slightly earlier version of the architecture.

Some part of the affordance_learning perception (al::perception) module.

There are several issues that I see in the current system:

  1. Object identification checks the similarity of the clusters according to how much their bounding box centers move within one perception cycle; that is why, in the video below, the id of cluster 4 becomes 1 once the object is displaced by more than a prespecified threshold. This can be improved with a more sophisticated similarity check (e.g. one that also considers bounding box dimensions and pose).
  2. The SurfaceFeatureExtractor class only extracts surface normals, and its parameters should be tuned for more descriptive performance (see the sketch after this list).
  3. MATLAB subplots are not shown in order; not really a problem, I’m going to fix this in a few minutes.
  4. PoseFeatureExtractor should be implemented before going into the learning experiments.
  5. The Gazebo range camera model doesn’t seem to acquire data from the top surfaces of cylinders, which might be a problem if fill-able type affordances are to be learned.
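
Regarding item 2, here is a minimal PCL normal-estimation sketch showing the kind of computation a SurfaceFeatureExtractor performs and the main parameter that needs tuning (the neighborhood radius). The values shown are placeholders, not the stack’s actual settings.

```cpp
// Estimate per-point surface normals for one object cluster with PCL.
#include <pcl/point_types.h>
#include <pcl/features/normal_estimation.h>
#include <pcl/search/kdtree.h>

pcl::PointCloud<pcl::Normal>::Ptr
computeNormals(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cluster,
               double radius = 0.02)  // 2 cm neighborhood; tune per sensor
{
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cluster);

  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(radius);

  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.compute(*normals);
  return normals;
}
```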

pr2 and ros::gazebo simulator: workspace segmentation (table and tabletop objects)

Preliminary results for workspace extraction seem promising. The tests are done using various ROS packages, in particular tabletop_object_detector and tabletop_collision_map_processing.

Table segmentation and object clustering can be done much better by fine-tuning the related parameters.
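
For reference, the sketch below spells out the two stages behind this kind of tabletop segmentation directly in PCL, to expose the parameters worth tuning (plane distance threshold, cluster tolerance, cluster size limits). It is an illustration of the pipeline, not the actual tabletop_object_detector implementation, and the numeric values are only placeholders.

```cpp
// Stage 1: fit the dominant plane (the table) with RANSAC.
// Stage 2: remove the table points and cluster the remaining points.
#include <vector>

#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/segmentation/sac_segmentation.h>

void segmentTabletop(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                     std::vector<pcl::PointIndices>& object_clusters)
{
  // 1. Fit the table plane.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);          // 1 cm plane inlier threshold
  seg.setInputCloud(cloud);

  pcl::PointIndices::Ptr table_inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
  seg.segment(*table_inliers, *coeffs);

  // 2. Drop the table points and cluster what remains on top of it.
  pcl::PointCloud<pcl::PointXYZ>::Ptr objects(
      new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(table_inliers);
  extract.setNegative(true);               // keep everything except the table
  extract.filter(*objects);

  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(objects);

  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.03);            // 3 cm gap splits clusters
  ec.setMinClusterSize(50);
  ec.setMaxClusterSize(25000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(object_clusters);
}
```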

It is strange, though, that the cylinder model in Gazebo seems to have problems with ray collisions. In some parts of the video, the cylinder is divided into two during the clustering stage, since the rays pass directly through the top of the cylinder.

The main loop seems to be really slow, but this is due to the 3D data acquisition rate, around 1-2 Hz. When the objects are moved, the system reacts to the changes quickly, even though the code hasn’t been optimized at all.

Swissranger sr4k model is put just across the robot for the time being, but it might be better, in terms of manipulation performance, putting the camera next to/top of the robot, or at least merging the sensori information obtained from the robots on-board 3d sensors. Yet, this is a temporary setup, and we don’t have that sort of sensors on our iCub robot, so I’m not going to deal with those issues.