ROS affordance_learning stack: perception v1.1

I’ve improved the perception module and restructured most of the classes considerably to give them a more manageable and distributed design. Here are the major changes:

  • Replaced the BoundingBox and Object classes with arm_navigation_msgs::CollisionObject and Entity type structures, and modified most of the modules correspondingly.
  • Added volumetric occupancy (volume intersection) based object similarity check routines (a rough sketch follows this list).
  • Added nested/merged box support, which lets us track objects more reliably.
  • ID assignment is now done irrespective of the objects’ poses.
  • Overall system performance increased from 3 fps to ~15 fps, thanks to the Kinect sensor and other improvements in the filtering components.
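
For reference, the volume-intersection check is conceptually something like the sketch below; the struct and function names are illustrative, not the actual stack’s API. Two clusters are treated as the same object when their bounding boxes overlap by a large enough fraction of the smaller box.

#include <algorithm>

// Illustrative axis-aligned bounding box; the real code uses
// arm_navigation_msgs::CollisionObject-based structures.
struct Box {
  double min[3];  // minimum corner (x, y, z)
  double max[3];  // maximum corner (x, y, z)
};

double volume(const Box& b) {
  return (b.max[0] - b.min[0]) * (b.max[1] - b.min[1]) * (b.max[2] - b.min[2]);
}

// Ratio of the intersection volume to the volume of the smaller box:
// 1.0 means full containment, 0.0 means no overlap.
double overlapRatio(const Box& a, const Box& b) {
  double inter = 1.0;
  for (int i = 0; i < 3; ++i) {
    double lo = std::max(a.min[i], b.min[i]);
    double hi = std::min(a.max[i], b.max[i]);
    if (hi <= lo) return 0.0;  // no overlap along this axis
    inter *= hi - lo;
  }
  return inter / std::min(volume(a), volume(b));
}

// Two clusters are considered the same object if their boxes overlap enough;
// the 0.5 threshold is an assumed value.
bool isSameObject(const Box& a, const Box& b, double threshold = 0.5) {
  return overlapRatio(a, b) >= threshold;
}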

Here is a video from one of the tests:

ROS affordance_learning stack: human-object interactions

I have recently created a human URDF model, which is used to:

  • filter out the 3D points that lie on the human partner (for the real iCub and the simulated PR2),
  • obtain the human partner’s kinematic body state during an interaction episode (for the real iCub and the simulated PR2),
  • make changes in a simulated robot’s (e.g. PR2’s) virtual environment by interacting with the objects and also with the robot itself.

The video below shows preliminary tests on this setup. In this video, I try to change the position and/or orientation of the objects in the PR2’s environment while, at the same time, extracting relevant features and visualizing them in rviz or MATLAB.

All the source code is open source and available on the project’s Google Code page. Please note that the source code changes frequently, so using the current version of the stack is not recommended. I hope to move the close-to-stable components under the trunk directory of the repository within a week or so; after that, these packages can be used more comfortably.

ROS affordance_learning stack: perception v1.0

I’ve finally finished the very first version of the perception component of the affordance_learning stack.

The perception module mainly consists of perceptors, and each perceptor includes specialized feature_extractor(s), depending on the problem or on the salient features that we have designed. Finally, there are object(s) that each feature_extractor processes. Objects can be thought of as the salient parts of the environment that the robot somehow extracts from the acquired raw sensory data.

Below is a UML diagram which shows, to some extent, a slightly earlier version of the architecture.

Some part of the affordance_learning perception (al::perception) module.
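
To make this composition concrete, here is a rough C++ sketch of the perceptor / feature_extractor / object relationship described above; all class and method names are illustrative rather than the actual al::perception interfaces.

#include <cstddef>
#include <vector>

struct Object {                 // a salient part of the environment
  int id;                       // cluster id assigned by the perception loop
  // ... point cluster, bounding box, extracted features, etc.
};

class FeatureExtractor {        // one specialized extractor per feature type
public:
  virtual ~FeatureExtractor() {}
  virtual void extract(Object& object) = 0;  // e.g. surface normals, pose
};

class Perceptor {               // holds the extractors relevant to its problem
public:
  void addExtractor(FeatureExtractor* e) { extractors_.push_back(e); }
  void perceive(std::vector<Object>& objects) {
    for (std::size_t i = 0; i < objects.size(); ++i)
      for (std::size_t j = 0; j < extractors_.size(); ++j)
        extractors_[j]->extract(objects[i]);
  }
private:
  std::vector<FeatureExtractor*> extractors_;  // not owned in this sketch
};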

There are several issues that I see in the current system:

  1. Object identification checks the similarity of clusters according to the change of their bounding box centers within one perception cycle; that is why, in the video below, the ID of cluster 4 becomes 1 after the object is displaced more than a prespecified threshold. This can be improved by using a more elaborate similarity check, e.g. one that also considers bounding box dimensions and pose (see the sketch after this list).
  2. The SurfaceFeatureExtractor class only extracts surface normals, and its parameters should be tuned for a more descriptive performance.
  3. MATLAB subplots are not shown in order; not really a problem, I will fix it shortly.
  4. PoseFeatureExtractor should be implemented before going into the learning experiments.
  5. The Gazebo range camera model doesn’t seem to acquire data from the top sides of the cylinders; this might be a problem if fillable-type affordances are to be learned.
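
As for item 1, an improved check could look like the sketch below (all names, weights, and thresholds are illustrative assumptions): combine the center displacement with a penalty on bounding box dimension changes, and keep the old ID only when the best-matching cluster of the next cycle scores below a threshold.

#include <cmath>

// Illustrative bounding box summary: center plus dimensions.
struct BBox {
  double cx, cy, cz;  // center
  double dx, dy, dz;  // dimensions
};

double centerDist(const BBox& a, const BBox& b) {
  return std::sqrt((a.cx - b.cx) * (a.cx - b.cx) +
                   (a.cy - b.cy) * (a.cy - b.cy) +
                   (a.cz - b.cz) * (a.cz - b.cz));
}

double dimDist(const BBox& a, const BBox& b) {
  return std::fabs(a.dx - b.dx) + std::fabs(a.dy - b.dy) + std::fabs(a.dz - b.dz);
}

// Lower score = more similar; a cluster inherits the previous cycle's id
// if its score against that object stays below a threshold. The weights
// here are assumed values, not tuned ones.
double similarityScore(const BBox& a, const BBox& b,
                       double wCenter = 1.0, double wDim = 2.0) {
  return wCenter * centerDist(a, b) + wDim * dimDist(a, b);
}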

Pioneer 3DX and SwissRanger SR4k settings for the real system and the ros::gazebo simulator

Installation procedures of external dependencies

Installing Packages

  • Install the erratic-robot package for the libdiffdrive_plugin.so Gazebo differential drive controller plugin:
sudo apt-get install ros-electric-erratic-robot
  • Install the Pioneer P3DX platform package. Note that the Gazebo model in this package uses the “differential_position2d” controller, whereas the Erratic platform uses “diffdrive_plugin”, i.e. “libdiffdrive_plugin.so”; in fact, the former model does not include any controller. The diffdrive plugin waits for velocity commands on a topic named “cmd_vel” as “geometry_msgs::Twist” messages (see the minimal publisher sketch after this list):
sudo apt-get install ros-electric-p2os
  • Check out the swissranger camera description package to a folder (my choice is ~/my_tools/ros_tools):
svn co http://ua-ros-pkg.googlecode.com/svn/trunk/arrg/ua_robots/swissranger_camera_description
  • Check out the swissranger_camera package to a folder (again ~/my_tools/ros_tools):
svn co https://code.ros.org/svn/ros-pkg/branches/trunk_cturtle/stacks/camera_drivers_experimental/swissranger_camera
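
A quick way to test the diffdrive plugin mentioned above is a minimal node that publishes velocity commands. In the sketch below, the topic name and message type are the ones stated above, while the node name, rate, and command values are arbitrary.

#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "p3dx_cmd_vel_test");  // arbitrary node name
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);

  ros::Rate rate(10);  // publish at 10 Hz
  while (ros::ok()) {
    geometry_msgs::Twist cmd;
    cmd.linear.x = 0.2;   // forward velocity in m/s
    cmd.angular.z = 0.1;  // yaw rate in rad/s
    pub.publish(cmd);
    rate.sleep();
  }
  return 0;
}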

Modifications

    • Edit the file “p2os/p2os_teleop/src/p2os_teleop.cc” so that it includes “sensor_msgs/Joy.h” instead of “joy/Joy.h”, and change the related code lines accordingly (see the sketch after this list). Also edit the “manifest.xml” file of this package: disable the “joy” dependency and add a “sensor_msgs” dependency.
    • Replace the “controller:differential_position2d” section of the file “p2os/p2os_urdf/defs/pioneer3dx.xacro” with the “controller:diffdrive_plugin” section of the file “erratic_robot/erratic_description/urdf/erratic_base.xacro”, and copy the kinematic properties of the Pioneer 3DX platform from the former file.
    • In order to visualize this model correctly in rviz, you should make a few modifications in the pioneer3dx xacro files. (This is because Gazebo sends transforms between the “odom” and “base_footprint” frames, while the robot’s transforms are originally sent w.r.t. the “base_link” frame, which has no transform w.r.t. “base_footprint”. Either send a transform between “base_link” and “base_footprint” as a fixed joint transformation, or do what is explained below.)
      • Change the “base_link” entries in the file p2os/p2os_urdf/defs/pioneer3dx_wheel.xacro to “${parent}”; this is certainly a mistake in that file.
      • Change the “base_link” entries in the file “p2os/p2os_urdf/defs/pioneer3dx.xacro” to “${base_name}”, and add “<property name="base_name" value="base_footprint"/>” at the very top of the file.
      • In rviz, select “/odom” as the Fixed Frame and “/base_footprint” as the Target Frame.
    • Modify swissranger_camera_description and change:
      • the visual->color properties to user-defined ones (not the Erratic-based ones),
      • the rpy property of ${name}_joint to rpy="0.0 0.0 0.0" (this is necessary for the simulation environment; don’t change it if the model is to be used on the real system with the swissranger_camera package),
      • the field of view, by referencing the SR4k manual.
    • Change “/dev/ttyS0” in the file “p2os_driver/include/p2os_driver/robot_params.h” to “/dev/ttyUSB0”.
    • Change “base_link” to “base_footprint” inside the file “p2os/p2os_driver/src/sip.cpp”.
    • Modify the parameters in the file “swissranger_camera/launch/sr_eth.launch” according to your application’s requirements, for instance:
<param name="auto_exposure" value="0" />
<param name="ether_addr" value="192.168.1.2" />
<param name="amp_threshold" value="10" />
<param name="integration_time" value="8" />
<param name="modulation_freq" value="1" />
<param name="frame_id" value="/swissranger_link" />
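
Regarding the p2os_teleop fix in the first item of this list, the change boils down to the include and the callback message type. The sketch below is illustrative (the node name and printed fields are assumptions), but sensor_msgs/Joy keeps the same axes/buttons fields as the old joy/Joy message.

#include <ros/ros.h>
#include <sensor_msgs/Joy.h>  // was: #include <joy/Joy.h>

// was: void joyCallback(const joy::Joy::ConstPtr& joy)
void joyCallback(const sensor_msgs::Joy::ConstPtr& joy) {
  // the axes/buttons fields carry over unchanged from the old message type
  ROS_INFO("axes[0]=%f axes[1]=%f", joy->axes[0], joy->axes[1]);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "joy_listener_example");  // arbitrary node name
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("joy", 10, joyCallback);
  ros::spin();
  return 0;
}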

Troubleshooting

  • If the Gazebo model isn’t shown correctly in rviz, this might be because the “/use_sim_time” parameter is set to “true”. In order to guarantee that it is always false, add “<param name="/use_sim_time" value="false" />” to your launch file.

ROS Questions

  1. p3dx urdf model isn’t shown correctly in rviz
  2. pioneer 3dx simulation using rviz and stage
  3. real platform and some problems about sending velocity commands

Known Issues

  • The links connected to base_swivel_joint and swivel_hubcap_joint cannot be visualized properly in rviz, since these joints’ states aren’t published by Gazebo, and robot_state_publisher therefore cannot publish the corresponding tf messages. A possible workaround is sketched below.
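
One untested workaround sketch: publish fixed positions for these two joints on /joint_states yourself, so that robot_state_publisher can emit the missing tf frames. The joint names come from the issue above; the zero positions, node name, and rate are assumptions.

#include <ros/ros.h>
#include <sensor_msgs/JointState.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "swivel_joint_state_publisher");  // arbitrary name
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<sensor_msgs::JointState>("joint_states", 1);

  ros::Rate rate(20);  // 20 Hz, assumed
  while (ros::ok()) {
    sensor_msgs::JointState js;
    js.header.stamp = ros::Time::now();
    js.name.push_back("base_swivel_joint");
    js.name.push_back("swivel_hubcap_joint");
    js.position.push_back(0.0);  // assume the swivel stays centered
    js.position.push_back(0.0);
    pub.publish(js);
    rate.sleep();
  }
  return 0;
}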

Tips & other resources

  • p2os tutorial is here.
  • This is useful for writing your own teleoperation node.
  • This is yet another example about a teleoperation node.
  • This explains how to set ports and some other stuff.
  • You can check whether the joystick is running correctly; replace “js0” with the proper joystick device name:
jstest /dev/input/js0