ROS affordance_learning stack: Human-object interactions

Recently, I created a human URDF model, which is used to:

  • filter out the 3D points that lie on the human partner (for the real iCub and the simulated PR2); a minimal filtering sketch is given after this list,
  • obtain the human partner’s kinematic body state during an interaction episode (for the real iCub and the simulated PR2),
  • make changes in a simulated robot’s (e.g. the PR2’s) virtual environment by interacting with the objects and also with the robot itself.
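To give a rough idea of the first item, here is a minimal sketch of how such a filter could be written as a ROS node in Python: it looks up the human model’s link frames via tf and drops every point that falls within a fixed radius of a link. The topic names, frame names and the radius here are assumptions for illustration only; the actual stack has its own parameters and uses the URDF link geometries rather than simple spheres.

#!/usr/bin/env python
# Sketch only: drop 3D points that fall near the human model's links.
# Frame names ("human/torso", ...), topics and RADIUS are illustrative assumptions.
import rospy
import tf
import numpy as np
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

HUMAN_LINKS = ['human/torso', 'human/right_hand', 'human/left_hand']  # assumed frames
RADIUS = 0.15  # metres; points closer than this to any human link are removed

class HumanPointFilter(object):
    def __init__(self):
        self.listener = tf.TransformListener()
        self.pub = rospy.Publisher('cloud_filtered', PointCloud2, queue_size=1)
        rospy.Subscriber('cloud_in', PointCloud2, self.callback, queue_size=1)

    def callback(self, cloud):
        # Where is each human link right now, expressed in the cloud's frame?
        centers = []
        for link in HUMAN_LINKS:
            try:
                trans, _ = self.listener.lookupTransform(cloud.header.frame_id, link, rospy.Time(0))
                centers.append(np.array(trans))
            except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
                continue
        pts = np.array(list(point_cloud2.read_points(cloud, field_names=('x', 'y', 'z'), skip_nans=True)))
        if len(pts) == 0:
            return
        # Keep only the points that are farther than RADIUS from every human link.
        keep = np.ones(len(pts), dtype=bool)
        for c in centers:
            keep &= np.linalg.norm(pts - c, axis=1) > RADIUS
        self.pub.publish(point_cloud2.create_cloud_xyz32(cloud.header, pts[keep].tolist()))

if __name__ == '__main__':
    rospy.init_node('human_point_filter')
    HumanPointFilter()
    rospy.spin()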

The video below shows preliminary tests of this setup. In the video, I change the position and/or orientation of the objects in the PR2’s environment while, at the same time, extracting relevant features and visualizing them in RViz or MATLAB.
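For those curious about the visualization side, below is a small hedged sketch of publishing one extracted feature (e.g. an object’s position) as a marker that RViz can display. The topic name, frame and the dummy pose are made up for the example and are not the stack’s real interface.

#!/usr/bin/env python
# Sketch only: publish an extracted object position as an RViz marker.
# Topic, frame and the dummy pose are illustrative assumptions.
import rospy
from visualization_msgs.msg import Marker

def publish_object_marker(pub, frame_id, x, y, z):
    m = Marker()
    m.header.frame_id = frame_id
    m.header.stamp = rospy.Time.now()
    m.ns = 'object_features'
    m.id = 0
    m.type = Marker.SPHERE
    m.action = Marker.ADD
    m.pose.position.x, m.pose.position.y, m.pose.position.z = x, y, z
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = m.scale.z = 0.05  # 5 cm sphere
    m.color.r, m.color.a = 1.0, 1.0           # opaque red
    pub.publish(m)

if __name__ == '__main__':
    rospy.init_node('feature_marker_demo')
    pub = rospy.Publisher('feature_markers', Marker, queue_size=1)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        publish_object_marker(pub, 'base_link', 0.6, 0.0, 0.8)  # dummy object pose
        rate.sleep()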

All of the source code is open source and available on the project’s Google Code page. Please note that the source code changes frequently, so using the current version of the stack is not recommended. I hope to move the close-to-stable components under the trunk directory of the repository within a week or so; after that, these packages can be used more comfortably.
