Research plan 2008-2010

Task 1: Signal processing in sensors, 3D environment perception and modelling methods in GIM context

3D object modelling in unstructured outdoor environments is a challenging problem. Partly obstructed visibility caused by weather conditions or vegetation inhibits the acquisition of a comprehensive view of the shape of objects. Therefore, the robustness to outliers in data segmentation and feature extraction is essential for successful model generation. Cluttered environments also restrict free manoeuvring of the vehicles for collecting measurement data for large-scale map building. As a consequence, it may not be possible to achieve enough overlap between neighbouring 3D measurement sets, such as range scans, to be able to compute, with current methods, the mutual alignment (i.e. registration) between the two data sets. However, by utilizing the symmetry of the objects in the environment the requirement for direct overlap between the data sets can be relaxed for registration computation.
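The registration step mentioned above can be illustrated as a robust point-set alignment. The following is a minimal trimmed-ICP-style sketch in NumPy (the function name, the 2-D case, and the 80% trimming fraction are illustrative assumptions, not the project's actual method):

```python
import numpy as np

def trimmed_rigid_align(src, dst, keep=0.8, iters=20):
    """Align 2-D point set `src` to `dst` with a trimmed ICP-style loop:
    at each iteration the worst (1 - keep) fraction of nearest-neighbour
    correspondences is discarded before the rigid transform is estimated,
    giving some robustness to outliers and partial overlap."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = src @ R.T + t
        # nearest-neighbour correspondences (brute force, for clarity)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        err = d2[np.arange(len(src)), nn]
        # trim: keep only the best-matching correspondences
        kept = np.argsort(err)[: int(keep * len(src))]
        p, q = moved[kept], dst[nn[kept]]
        # closed-form rigid transform between the kept pairs (Kabsch/SVD)
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        Rd = Vt.T @ U.T
        if np.linalg.det(Rd) < 0:      # guard against reflections
            Vt[-1] *= -1
            Rd = Vt.T @ U.T
        td = q.mean(0) - p.mean(0) @ Rd.T
        R, t = Rd @ R, Rd @ t + td     # compose with current estimate
    return R, t
```

The trimming is what relaxes the sensitivity to outliers: gross mismatches land in the discarded fraction instead of corrupting the transform estimate.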

For building a large-scale semantic situation awareness model of the operating environment of robot work machines and/or humans, automatic extraction of object geometry should be exploited as far as possible. In many situations, however, the automatic methods for object modelling reach their limits, and the human counterpart has to use her/his cognition to aid the modelling process. Human-assisted environment perception and modelling methods will be an important research subject in the future.


Some kind of motion is usually present when a mobile robot or machine is operating. Motion makes it possible to compute 3D information, and even scene structure, from sequences of passive camera images. An estimated motion field can be utilized e.g. in controlling the movements of the robot or machine itself, in order to avoid obstacles and stay on a natural or built path or road. Essential 3D information can be estimated from the motion field even in a very complicated environment, without recognizing individual objects. Motion vision thus supplements laser rangefinders effectively in measuring 3D information for motion control. In a static situation, “static” (colour) computer vision can supplement laser rangefinders in recognition tasks, so that 3D objects can be recognized and their 3D shape measured. In mobile robotics applications, passive computer vision has largely been displaced by laser rangefinders. Research is needed to develop computer vision algorithms fast enough for mobile machine applications. In applications intended for natural outdoor environments, research is also needed to overcome difficulties caused by changing illumination and the presence of many disturbing objects.
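The depth-from-motion idea can be made concrete for the simplest case, a camera translating sideways at known speed. A minimal sketch under a pinhole model (the function name and the pure-translation assumption are illustrative; real flow fields also contain rotational components that must be removed first):

```python
import numpy as np

def depth_from_lateral_flow(flow_u, f, tx):
    """Recover depth from the horizontal motion field of a camera that
    translates sideways with known speed `tx` (pure translation,
    pinhole model, focal length `f` in pixels). For this motion a point
    at depth Z produces flow u = -f * tx / Z, so Z = -f * tx / u."""
    flow_u = np.asarray(flow_u, dtype=float)
    return -f * tx / flow_u
```

Note that depth follows from the motion field alone, with no object recognition involved, which is exactly why motion vision is useful in complicated environments.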

Data association has long been one of the hardest aspects of dynamic sequential sensor processing. Correct data association is a precondition for a consistent estimate, whereas a wrong association can make the estimate diverge. To mitigate this problem, algorithms have been developed that make data association more robust (e.g. joint compatibility branch and bound, JCBB, and maximum clique search), as well as estimators that can tolerate occasional wrong associations, such as the multi-hypothesis Kalman filter.
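The simplest building block of such methods is individual-compatibility gating. A minimal NumPy sketch (the function name and the shared innovation covariance are illustrative assumptions; JCBB additionally checks the joint compatibility of the whole association set):

```python
import numpy as np

def gated_nearest_neighbour(predictions, S, measurements, gate=9.21):
    """Associate each measurement with the closest predicted feature in
    Mahalanobis distance, rejecting pairs outside a chi-square gate
    (9.21 is roughly the 99% gate for 2-D measurements). Returns a list
    of (measurement_index, prediction_index) pairs, with -1 marking an
    unmatched measurement."""
    Sinv = np.linalg.inv(S)
    pairs = []
    for i, z in enumerate(measurements):
        v = predictions - z                        # candidate innovations
        d2 = np.einsum('nj,jk,nk->n', v, Sinv, v)  # squared Mahalanobis
        j = int(d2.argmin())
        pairs.append((i, j if d2[j] < gate else -1))
    return pairs
```

The gate is what prevents a spurious measurement from being forced onto the nearest feature; rejected measurements can then spawn new features or new hypotheses.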

It is necessary to use more complex environment models than those in use today. The model clearly has to be hierarchical, supporting topological, metric, and schematic information. Such a representation enables the use and sharing of different levels of information by different entities. In this context sharing is very important: it means that many entities can build, update, and use a common environment model, even if the entities themselves are different. There are many challenges in the design of this type of model:

  • how to build the model
  • how to share the model
  • how to update the model

Updating the model is related to the dynamic nature of the world. The model should not be static; it should be able to cope with changes in the environment. It should also be capable of incorporating dynamic objects, such as the robots themselves and other moving objects.
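The three layers and the update requirement can be sketched as a small data structure. All names and fields below are illustrative assumptions, not a proposed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Place:
    """One node of the topological layer; it carries its own metric
    sub-map and schematic (semantic) annotations."""
    name: str
    landmarks: dict = field(default_factory=dict)   # metric: id -> (x, y)
    labels: set = field(default_factory=set)        # schematic: "road", ...
    neighbours: list = field(default_factory=list)  # topological links

class EnvironmentModel:
    """A minimal sketch of a hierarchical, shareable model: any entity
    may add places, connect them, and update landmarks as the world
    changes."""
    def __init__(self):
        self.places = {}

    def add_place(self, name):
        self.places[name] = Place(name)

    def connect(self, a, b):
        self.places[a].neighbours.append(b)
        self.places[b].neighbours.append(a)

    def update_landmark(self, place, lm_id, pos):
        # overwriting an existing id is how the model copes with change
        self.places[place].landmarks[lm_id] = pos
```

Because updates address landmarks by identity within a named place, a second entity can apply the same update without holding the full metric map, which is the essence of the sharing requirement above.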

Expected results: enhanced and new methods for 3D perception and modelling of unstructured and natural environments

Task 2: Fusing 3D laser ranging, passive camera images and inertial sensor information for effective environment perception

Present mobile robot navigation, both indoors and outdoors, is based on a 2D horizontal world. The scanning laser rangefinder has been the most popular sensing device for autonomous work machines and mobile robots. Laser rangefinder technology provides accurate distance information in real time. On the other hand, machine vision using passive cameras produces accurate bearing information. Concurrent use of passive camera images and laser range information can produce a totally different level of environmental sensing. The ability to detect moving objects such as people, cars, and animals in a mobile robot’s environment increases the applicability and safety of autonomous work tasks.

Real-time 3D modelling of the mobile robot’s environment is also necessary for navigation on uneven natural terrain, for example when climbing a hill. The fusion of the robot’s inertial sensor information with camera and laser data is the key technology for environmental modelling.

Expected results: enhanced and new methods for fusing different types of perception data.

Task 3: Development of SLAM methods for outdoor use

The results of sensor processing and perception are used on the upper level in simultaneous localization and mapping (SLAM). The 3D models needed in sensor processing and perception in order to find and recognize objects are more detailed than the object models needed in SLAM, in which the recognized object type and a few numerical parameters, such as dimensions and position, are needed for each object or element of global structure in the resulting map.

SLAM is a very actively researched topic in the area of mobile robotics. The boom started at the beginning of the 2000s, and now there are several sessions on this topic at every robotics conference. This active research has produced numerous solutions to the problem. The basic SLAM problem has been solved using the EKF, sequential Monte Carlo methods, correlation-based methods, and combinations of these.
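The EKF formulation mentioned above reduces, for a single landmark, to a standard Kalman update on a joint robot-landmark state. A minimal sketch with a deliberately linear measurement model (landmark position relative to the robot in a world-aligned frame; a range/bearing model would add a Jacobian — an illustrative fragment, not a full SLAM system):

```python
import numpy as np

def ekf_slam_update(x, P, z, R):
    """One EKF measurement update for a state [xr, yr, xl, yl] holding
    a robot position and one landmark. The measurement z is the
    landmark position relative to the robot, so H is constant."""
    H = np.array([[-1., 0., 1., 0.],
                  [0., -1., 0., 1.]])       # z = landmark - robot
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new
```

The characteristic SLAM behaviour is visible even here: because H couples robot and landmark states, one relative measurement updates both, and P acquires robot-landmark correlations.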

The key difficulties in SLAM have been thoroughly identified: data association (matching), nonlinearity, dimensionality, the dynamic world, and multi-entity SLAM. Partial solutions exist for all of these problems. The future challenge is to make these algorithms applicable to field and service robots. SLAM is just one function of a service robot; it cannot consume all of the robot's computing power, and it has to be robust and functional. The key features of field and service robots, from the SLAM point of view, are:

  • they work with humans
  • they work in various dynamic environments
  • they can work in heterogeneous teams

Nonlinearity, in the case of SLAM, means that even if everything is done correctly, the estimate may fail to converge because of the linearized model. The solutions are similar to those for the dimensionality problem: perform the estimation in separate, independent sub-maps. Estimation is then done on two levels: on the local level the local map and position are maintained, and on the global level the correct sub-map is selected.

Finally, there is multi-robot SLAM. Here the problem is how to share the estimates between the entities and how to update the estimates globally. Multi-robot SLAM has the same problems as single-robot SLAM, with the added complexity that multiple entities bring to the system. Building up the shared model is itself a SLAM problem: all the layers should be consistent for every entity, i.e. the topology should be understood in the same way by all, and the frame of reference should be common to all.
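The global-level task of selecting the correct sub-map can be sketched very simply. Everything here (the function name, distance-to-origin criterion, and hysteresis value) is an illustrative assumption; real systems also check map overlap and loop closures:

```python
import numpy as np

def select_submap(origins, pose, current, hysteresis=2.0):
    """Choose the active local map in a two-level (local/global) SLAM
    scheme: stay in the current sub-map unless another sub-map origin
    is closer by more than `hysteresis` metres, which avoids rapid
    switching near sub-map boundaries."""
    d = np.linalg.norm(np.asarray(origins) - pose, axis=1)
    best = int(d.argmin())
    if d[best] + hysteresis < d[current]:
        return best
    return current
```

Keeping the sub-maps small bounds the state dimension of each local estimator, which is precisely how the sub-map approach addresses both the dimensionality and the linearization problems.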

Expected results: enhanced and new methods to facilitate the use of SLAM in the GIM context

Task 4: Distributed use of MEMS sensors, and methods for using internal sensors for indirect sensing of terrain quality or for obstacle probing

Mobile robots moving on uneven terrain require knowledge of their full 6-DOF motion. MEMS inertial sensors can be used to obtain this information; however, the drift of the sensor values requires further study. Inertial sensors also play an important role in analysing laser rangefinder and camera data on uneven terrain. Acceleration sensors offer promising opportunities for detecting the quality of the terrain or small obstacles. Similar information can be obtained from transient power consumption in the motion system, such as the wheel motor currents when electric motors are used in a mobile vehicle or robot.

Different sources of information can be further fused to obtain a better estimate of the current situation in the motion environment. MEMS sensors will be relatively cheap, so that many of them can be used as a sensor network for measuring e.g. contact or collision. The development of self-organizing and fault-tolerant distributed sensor networks and of sensor fusion in GIM is important. Affordable MEMS sensor technology, together with Bluetooth communication, multiplies the opportunities to measure new, physically distributed information, for example independent wheel-axle accelerations.
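Such fusion can be sketched with two simple building blocks: a sliding-window RMS as a roughness indicator for a vibration signal (accelerometer or motor current), and inverse-variance weighting to combine two independent roughness estimates. Both functions and their names are illustrative assumptions:

```python
import numpy as np

def windowed_rms(signal, win):
    """Sliding-window RMS of a vibration signal after mean removal; a
    simple roughness indicator for accelerometer or motor-current data."""
    s = np.asarray(signal, dtype=float)
    s = s - s.mean()
    ms = np.convolve(s * s, np.ones(win) / win, mode='valid')
    return np.sqrt(ms)

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance (minimum-variance) fusion of two independent
    estimates, e.g. roughness from accelerations and from wheel motor
    currents; returns the fused estimate and its variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return est, var
```

The fused variance is always smaller than either input variance, which is the formal sense in which combining the two information sources yields a better estimate.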

Expected results: new innovations utilizing MEMS sensor technology to augment the standard GIM internal sensors