ROS affordance_learning stack: perception v1.1

I’ve improved the perception module and reorganized most of the classes considerably to give the stack a more manageable and distributed structure. Here are the major changes:

  • Replaced the BoundingBox and Object classes with arm_navigation_msgs::CollisionObject and Entity type structures, and modified most of the modules accordingly.
  • Added volumetric occupancy (volume intersection) based object similarity check routines (a rough sketch of the idea is given after this list).
  • Added nested/merged box support, which lets us track objects more reliably.
  • ID assignment is now done irrespective of the pose of the objects.
  • Overall system performance increased from 3 fps to ~15 fps, thanks to the Kinect sensor and other improvements in the filtering components.
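
To give an idea of the volume-intersection check: for two axis-aligned bounding boxes, the overlap can be computed per axis and normalized by the volume of the union. The snippet below is only a minimal sketch with a made-up Box3D type, not the actual al::perception code, which operates on CollisionObject structures.

#include <algorithm>

// Illustrative axis-aligned box; the real code works on
// arm_navigation_msgs::CollisionObject instances.
struct Box3D
{
    double min[3];
    double max[3];

    double volume() const
    {
        return (max[0] - min[0]) * (max[1] - min[1]) * (max[2] - min[2]);
    }
};

// Ratio of the intersection volume to the volume of the union of the two
// boxes; a value close to 1 means the boxes likely belong to the same object.
double volumeSimilarity(const Box3D &a, const Box3D &b)
{
    double inter = 1.0;
    for (int i = 0; i < 3; i++)
    {
        double lo = std::max(a.min[i], b.min[i]);
        double hi = std::min(a.max[i], b.max[i]);
        if (hi <= lo)
            return 0.0;   // boxes do not overlap at all
        inter *= (hi - lo);
    }
    return inter / (a.volume() + b.volume() - inter);
}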

Here is a video from one of the tests:

ROS affordance_learning stack: Human object interactions

Lately, I have created a human URDF model which is used to:

  • filter out the 3D points that belong to the human partner (for the real iCub and the simulated pr2),
  • obtain the human partner’s kinematic body state during an interaction episode (for the real iCub and the simulated pr2),
  • make changes in a simulated robot’s (e.g. pr2) virtual environment by interacting with the objects and also with the robot itself.

The video below shows preliminary tests of this setup. In the video, I change the position and/or orientation of the objects in pr2’s environment while, at the same time, extracting relevant features and visualizing them in rviz or MATLAB.

All the source code is open source and available on the project’s Google Code page. Please note that the source code changes frequently, so using the current version of the stack is not recommended. I hope to move the close-to-stable components under the trunk directory of the repository in a week or so; after that, these packages can be used more comfortably.

ROS affordance_learning stack: perception v1.0

I’ve finally finished the very first version of the perception component of the affordance_learning stack.

The perception module mainly consists of perceptors, and each perceptor includes specialized feature_extractor(s), depending on the problem or on the salient features we have designed. Finally, there are the object(s) that each feature_extractor processes. Objects can be thought of as the salient parts of the environment that the robot somehow extracts from the acquired raw sensory data.
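
As a rough illustration of this structure (the class and method names below are made up for the example; they are not the actual al::perception interfaces):

#include <vector>

// Illustrative skeleton of the perceptor / feature_extractor / object
// hierarchy described above; not the actual al::perception API.
class Object
{
    // a salient part of the environment segmented from raw sensory data
};

class FeatureExtractor
{
public:
    virtual ~FeatureExtractor() {}
    // compute the features (e.g. surface normals, pose) of a single object
    virtual void extract(Object &object) = 0;
};

class Perceptor
{
public:
    // each perceptor owns the extractors relevant to its problem
    void addExtractor(FeatureExtractor *extractor)
    {
        extractors_.push_back(extractor);
    }

    // run every extractor on every currently tracked object
    void perceive(std::vector<Object> &objects)
    {
        for (size_t i = 0; i < objects.size(); i++)
            for (size_t j = 0; j < extractors_.size(); j++)
                extractors_[j]->extract(objects[i]);
    }

private:
    std::vector<FeatureExtractor*> extractors_;
};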

Below is a UML diagram which shows a slightly earlier version of the architecture to some extent.

Some parts of the affordance_learning perception (al::perception) module.

There are several issues that I see in the current system:

  1. Object identification checks the similarity of the clusters according to how much their bounding box centers change in one perception cycle; that is why, in the video below, the id of cluster 4 becomes 1 once the object is displaced by more than a prespecified threshold. This can be improved by using a more sophisticated similarity check (e.g. one that also considers bounding box dimensions and pose).
  2. The SurfaceFeatureExtractor class only extracts surface normals, and its parameters should be tuned for a more descriptive performance.
  3. MATLAB subplots are not shown in order; not really a problem, I will fix it shortly.
  4. PoseFeatureExtractor should be implemented before moving on to the learning experiments.
  5. The Gazebo range camera model doesn’t seem to acquire data from the top sides of cylinders; this might be a problem if fillable-type affordances are to be learned.

iCub simple safety check for end-effector cartesian coordinates

One simple way to check whether the end-effector (hand) collides with the robot’s own body is to check whether the cartesian coordinates of the end effector stay inside a bounding box that represents the robot’s workspace.

Well, this obviously neglects the obstacles in the configuration space, in addition to neglecting self-collision checking of the arms, but it still helps us understand whether the robot is trying to do something stupid (hitting itself in the face, for instance 🙂).

In order to do this, I have inserted a function into “iCub/main/src/libraries/icubmod/cartesianController/ServerCartesianController.cpp” so that the generated velocity is checked before being sent (inside “executingTraj”, before the “ctrl->isInTarget()” if statements).

//++Kadir
/************************************************************************/
bool ServerCartesianController::isVelocitySafe(const yarp::sig::Vector &v)
{
    bool safe = true;

    if(kinPart == "arm")
    {
        // get current joint positions (feedback)
        Vector q_cur = fb;

        // find intermediate joint angles
        // - constant joint velocity is assumed over one thread period
        // - getRate() returns the thread period in ms, hence the division by 1000
        Vector q_inter = q_cur + v*getRate()/1000;

        // find end effector position via fwdKinematics
        Vector x_inter = chain->EndEffPose(q_inter);

        //for(int i=0;i<x_inter.size();i++)
        //    printf("%f ", x_inter[i]);
        //printf("\n");

        if(kinType=="left")
        {
            safe= 
                (x_inter[0]<-0.15) && (x_inter[0]>-0.5) && 
                (x_inter[1]<0.05) && (x_inter[1]>-0.4) && 
                (x_inter[2]<0.3) && (x_inter[2]>-0.10);
            if(!safe)
                printf( "!WARNING: joint vals for left arm are not safe\n");
        }
        else if(kinType=="right")
        {
            safe= 
                (x_inter[0]<-0.15) && (x_inter[0]>-0.5) && 
                (x_inter[1]>-0.05) && (x_inter[1]<0.4) && 
                (x_inter[2]<0.3) && (x_inter[2]>-0.10);
            if(!safe)
                printf( "!WARNING: joint vals for right arm are not safe\n");
        }
        else
            printf( "Wrong end effector type, try LEFT or RIGHT\n");
    }
    return safe;
}
//--Kadir

This function is then called inside the “run” function:

//++Kadir
// check if the iterated joint velocities are safe
// - send them if they are safe, otherwise stop the limb
if (isVelocitySafe(ctrl->get_qdot()))
    // send joint velocities to the robot [deg/s]
    sendVelocity(CTRL_RAD2DEG*ctrl->get_qdot());
else
    stopLimbVel();
//--Kadir

Modular Cartesian solver/controller diagram

This safety check kicks in just before the q_dot joint velocities are sent, by evaluating whether the corresponding velocity request would take the arm outside the predefined workspace. If it would, the velocity values are zeroed.

pr2 and ros::gazebo simulator: workspace segmentation (table and tabletop objects)

Preliminary results for workspace extraction seem promising. Tests are done using various ROS packages, in particular tabletop_object_detector and tabletop_collision_map_processing.

Table segmentation and object clustering can be done much better by fine-tuning the related parameters, for example along the lines sketched below.
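
The tabletop segmentation node exposes its filtering and clustering settings as ROS parameters that can be overridden in a launch file. The snippet below is only illustrative: the parameter names are the ones I recall from the tabletop_object_detector package and should be verified against the installed version, and the values are placeholders.

<node pkg="tabletop_object_detector" type="tabletop_segmentation" name="tabletop_segmentation" output="screen">
  <!-- parameter names and values are illustrative; check the package's own launch files for the exact set -->
  <param name="z_filter_min" value="0.35" />
  <param name="z_filter_max" value="1.5" />
  <param name="clustering_voxel_size" value="0.003" />
  <param name="cluster_distance" value="0.03" />
  <param name="min_cluster_size" value="300" />
</node>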

It is strange, though: the cylinder model in Gazebo seems to have problems with ray collisions. In some parts of the video, the cylinder is divided into two during the clustering stage, since rays pass directly through the top of the cylinder.

The main loop seems to be really slow, but this is due to the 3D data acquisition rate, which is around 1-2 Hz. When the objects are moved, the system reacts to the changes quickly, even though the code hasn’t been optimized at all.

The Swissranger sr4k model is placed just across from the robot for the time being, but it might be better, in terms of manipulation performance, to put the camera next to or on top of the robot, or at least to merge its data with the sensory information obtained from the robot’s on-board 3D sensors. Still, this is a temporary setup, and we don’t have that sort of sensor on our iCub robot, so I’m not going to deal with those issues.

iCub Simulator: changing table friction properties

We’ve changed the surface characteristics of the table model that comes built-in with the ODE-based iCub simulator. I’m sure there are more elegant solutions, but this hack seems to suffice for our application.

The following code snippet should be inserted after this line in the file “iCub/main/src/simulators/iCubSimulation/odesdl/iCub_Sim.cpp”:

if (b1 && b2 && dAreConnectedExcluding (b1,b2,dJointTypeContact)) return;

Code to add:

//++ Kadir && Guner
const dReal* pos_o1 = NULL;
const dReal* pos_o2 = NULL;
if (dGeomGetClass(o1) != dPlaneClass && dGeomGetClass(o2) != dPlaneClass)
{
    pos_o1 = dGeomGetPosition(o1);
    pos_o2 = dGeomGetPosition(o2);
}

// dGeomGetPosition() returns a pointer to (x, y, z); the values below
// should match the position of the built-in table top in the simulator world
bool table_collision_point = false;
if (pos_o1 && fabs(pos_o1[1]-0.5) < 0.01 && fabs(pos_o1[2]-0.45) < 0.01 && fabs(pos_o1[0]) < 0.01)
{
    // this is the table top: change the surface mu to an appropriate value
    // so that it can be used accordingly when dCollide is called
    table_collision_point = true;
}
else if (pos_o2 && fabs(pos_o2[1]-0.5) < 0.01 && fabs(pos_o2[2]-0.45) < 0.01 && fabs(pos_o2[0]) < 0.01)
{
    // this is the table top: change the surface mu to an appropriate value
    // so that it can be used accordingly when dCollide is called
    table_collision_point = true;
}

dContact contact[MAX_CONTACTS]; // up to MAX_CONTACTS contacts per box-box
for (i = 0; i < MAX_CONTACTS; i++)
{
    contact[i].surface.mode = dContactSlip1 | dContactSlip2 | dContactBounce | dContactSoftCFM;
    if (table_collision_point)
        contact[i].surface.mu = 10.0;      // finite friction on the table top
    else
        contact[i].surface.mu = dInfinity;
    contact[i].surface.mu2 = 0;
    contact[i].surface.bounce = 0.01;
    contact[i].surface.bounce_vel = 0.01;
    contact[i].surface.slip1 = (dReal)0.000001;
    contact[i].surface.slip2 = (dReal)0.000001;
    contact[i].surface.soft_cfm = 0.0001;
}
//-- Kadir && Guner

Pioneer 3dx and Swissranger sr4k settings for real system and ros::gazebo simulator

Installation procedures of external dependencies

Installing Packages

  • Install the erratic-robot package for the libdiffdrive_plugin.so Gazebo differential drive controller plugin:
sudo apt-get install ros-electric-erratic-robot
  • Install the pioneer p3dx platform related package. Note that the Gazebo model in this package uses the “differential_position2d” controller, whereas the erratic platform uses “diffdrive_plugin”, i.e. “libdiffdrive_plugin.so”; in fact, the former model does not include any working controller. The diffdrive plugin waits for velocity information on a topic named “cmd_vel” as a “geometry_msgs::Twist” message (a minimal publisher sketch is given at the end of this post):
sudo apt-get install ros-electric-p2os
  • Check out the swissranger camera description package to a folder (my choice is ~/my_tools/ros_tools):
svn co http://ua-ros-pkg.googlecode.com/svn/trunk/arrg/ua_robots/swissranger_camera_description
  • Check out the swissranger_camera package to a folder (again ~/my_tools/ros_tools); see the note after this list about making these checked-out packages visible to ROS:
svn co https://code.ros.org/svn/ros-pkg/branches/trunk_cturtle/stacks/camera_drivers_experimental/swissranger_camera
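
For ROS to find the packages checked out above, the folder they live in has to be on your ROS_PACKAGE_PATH. Assuming the ~/my_tools/ros_tools location used here, adding a line like the following to ~/.bashrc should do:

export ROS_PACKAGE_PATH=~/my_tools/ros_tools:$ROS_PACKAGE_PATH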

Modifications

    • Edit the file “p2os/p2os_teleop/src/p2os_teleop.cc” so that it includes “sensor_msgs/Joy.h” instead of “joy/Joy.h”, and change the related code lines accordingly. Also edit the “manifest.xml” file of this package (remove the “joy” dependency and add a “sensor_msgs” dependency).
    • Replace the “controller:differential_position2d” section of the file “p2os/p2os_urdf/defs/pioneer3dx.xacro” with the “controller:diffdrive_plugin” section of the file “erratic_robot/erratic_description/urdf/erratic_base.xacro”, and copy over the kinematic properties of the pioneer3dx platform from the former file.
    • In order to correctly visualize this model in rviz, you should make a few modifications in the pioneer3dx xacro files. (This is because Gazebo sends transforms between the “odom” and “base_footprint” frames, while the robot’s transforms are originally sent w.r.t. the “base_link” frame, which has no transform w.r.t. “base_footprint”. Either you publish a transform between “base_link” and “base_footprint” as a fixed joint transformation (an example launch entry is given after this list), or do what is explained below.)
      • change the “base_link” entries in the file p2os/p2os_urdf/defs/pioneer3dx_wheel.xacro to “${parent}”; this is certainly a mistake in that file.
      • change the “base_link” entries in the file “p2os/p2os_urdf/defs/pioneer3dx.xacro” to “${base_name}” and add “<property name="base_name" value="base_footprint"/>” at the very beginning of the file.
      • In rviz, select “/odom” as the Fixed Frame and “/base_footprint” as the Target Frame.
    • modify swissranger_camera_description and change:
      • the visual->color properties to user-defined ones (not the erratic-based ones),
      • the rpy property of ${name}_joint to rpy="0.0 0.0 0.0" (this is necessary for the simulation environment; don’t change it if the model is to be used on the real system with the swissranger_camera package),
      • the field of view, by referencing the sr4k manual.
    • Change “/dev/ttyS0” in the file “p2os_driver/include/p2os_driver/robot_params.h” to “/dev/ttyUSB0”.
    • change “base_link” to “base_footprint” inside the file “p2os/p2os_driver/src/sip.cpp”
    • modify the parameters in the file “swissranger_camera/launch/sr_eth.launch” according to your application requirements, for instance:
<param name="auto_exposure" value="0" />
<param name="ether_addr" value="192.168.1.2" />
<param name="amp_threshold" value="10" />
<param name="integration_time" value="8" />
<param name="modulation_freq" value="1" />
<param name="frame_id" value="/swissranger_link" />

Troubleshooting

  • If the Gazebo model isn’t shown correctly in rviz, this might be because the parameter “/use_sim_time” is set to “true”. In order to guarantee that it is always false, add “<param name="/use_sim_time" value="false" />” to your launch file.

ROS Questions

  1. p3dx urdf model isn’t shown correctly in rviz
  2. pioneer 3dx simulation using rviz and stage
  3. real platform and some problems about sending velocity commands

Known Issues

  • the links connected to base_swivel_joint and swivel_hubcap_joint cannot be visualized properly in rviz, since these joints’ states aren’t published by Gazebo and robot_state_publisher therefore cannot publish the corresponding tf messages.

Tips & other resources

  • p2os tutorial is here.
  • This is useful for writing your own teleoperation node.
  • This is yet another example about a teleoperation node.
  • This explains how to set ports and some other stuff.
  • You can check whether the joystick is working correctly with the command below; replace “js0” with the proper joystick device name:
jstest /dev/input/js0
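
Finally, since the diffdrive plugin mentioned in the installation section listens for geometry_msgs::Twist messages on the “cmd_vel” topic, a teleoperation or test node boils down to publishing such messages. Below is a minimal, illustrative publisher (node name, rate, and velocity values are arbitrary), not the actual p2os_teleop code:

#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "simple_cmd_vel_publisher");
    ros::NodeHandle nh;

    // the diffdrive plugin subscribes to geometry_msgs::Twist on "cmd_vel"
    ros::Publisher cmd_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);

    ros::Rate rate(10);   // publish commands at 10 Hz
    while (ros::ok())
    {
        geometry_msgs::Twist cmd;
        cmd.linear.x  = 0.1;   // drive forward slowly [m/s]
        cmd.angular.z = 0.0;   // no rotation [rad/s]
        cmd_pub.publish(cmd);
        rate.sleep();
    }
    return 0;
}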