Research plan 2008-2010

Task 1: Further development of “the spatial situation awareness concept”

Field and service robots are used for physical work tasks, which are all very location-bound. To fulfil the requirements a robot has to be able to perceive the environment, recognize objects, and exchange position- and object-related semantic information with the operator. The problem is that robots and humans understand the environment in very different ways. Robots act according to accurate numerical coordinates, while humans mainly position themselves roughly relative to known objects, practically never knowing their numerical coordinates.

Common situation awareness is a new concept intended to support this communication. The environment is modelled as a general map and object database. The environmental information can then be interpreted and presented in both robot-specific and operator-specific forms, i.e. both parties receive the data in the way that is most natural for them. Objects and certain locations and areas have symbolic names, which makes a semantic dialogue between the robot and the operator possible. For example: “Where is the ball?” “The ball is near the tree.”
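
As a rough illustration of the idea, the sketch below keeps the objects in one shared database and offers two views of the same data: numeric coordinates for the robot and a relative, spoken-style description for the operator. The class and method names (WorldObject, SituationMap, operator_view) and the distance threshold are illustrative assumptions, not part of the plan.

```python
import math
from dataclasses import dataclass

@dataclass
class WorldObject:
    """An entry in the shared object database: numeric pose for the robot,
    symbolic name for the operator."""
    name: str    # symbolic name used in dialogue, e.g. "ball"
    x: float     # metric coordinates used by the robot
    y: float

class SituationMap:
    """Toy common situation awareness model: one store, two views."""

    def __init__(self):
        self.objects = {}

    def add(self, obj: WorldObject):
        self.objects[obj.name] = obj

    def robot_view(self, name: str):
        """Robot-specific form: accurate numeric coordinates."""
        o = self.objects[name]
        return (o.x, o.y)

    def operator_view(self, name: str, near_threshold: float = 2.0) -> str:
        """Operator-specific form: the position described relative to the
        nearest other named object, as in 'The ball is near the tree.'"""
        target = self.objects[name]
        others = [o for o in self.objects.values() if o.name != name]
        if not others:
            return f"The {name} is at an unknown place."
        nearest = min(others, key=lambda o: math.hypot(o.x - target.x, o.y - target.y))
        dist = math.hypot(nearest.x - target.x, nearest.y - target.y)
        relation = "near" if dist < near_threshold else "far from"
        return f"The {name} is {relation} the {nearest.name}."

# Example dialogue: "Where is the ball?" -> "The ball is near the tree."
world = SituationMap()
world.add(WorldObject("tree", 10.0, 5.0))
world.add(WorldObject("ball", 11.2, 5.4))
print(world.robot_view("ball"))     # (11.2, 5.4) for the robot
print(world.operator_view("ball"))  # "The ball is near the tree." for the operator
```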

Development of the common situation awareness concept includes environmental modelling, a spatial HRI, and spatial dialogue.

Modelling

Both CoE-GIM teams have long experience of both 2D and 3D environmental modelling. This modelling experience will be integrated and extended in order to develop methods for automatic model processing and object recognition in both natural and human-built environments, as well as for easy human-supported object naming. Modelling is based on a priori maps, where available, and on off-/on-line laser scanning and camera imaging of the environment. The data will be pre-processed automatically into a simple 3D environmental model.
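
One possible pre-processing step of this kind is voxel-grid downsampling of the raw scan points into a sparser point set that can serve as a simple 3D model. The sketch below assumes the scan is available as a NumPy array; the function name and the voxel size are illustrative.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float = 0.5) -> np.ndarray:
    """Reduce a raw laser-scan point cloud (N x 3 array of x, y, z in metres)
    to one centroid per voxel, a common first step towards a simple 3D model."""
    indices = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    _, inverse = np.unique(indices, axis=0, return_inverse=True)
    inverse = inverse.ravel()                                  # one voxel id per point
    counts = np.bincount(inverse)
    centroids = np.zeros((counts.size, 3))
    for dim in range(3):                                       # centroid of each voxel
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids

# Example: 100 000 simulated scan points reduced to a much sparser model
raw_scan = np.random.rand(100_000, 3) * 20.0     # stand-in for real scanner data
model_points = voxel_downsample(raw_scan)
print(raw_scan.shape, "->", model_points.shape)
```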

Methods to be developed include filtering and fusing the data from different sensors and automatic and semi-automatic recognition of both natural and human-made objects. In semi-automatic recognition the operator names the objects and areas in the pre-processed model. Finally, the system is completed with interpreters, which convert the model into an occupancy grid or other geometric map for the robot and into a simple geometric or topographic map for the operator. The best presentation form for humans will also be researched. This modelling part will be done partially in the perception research package (RP 6).
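
The interpreter for the robot could, for instance, rasterize the object model into an occupancy grid. The sketch below is a minimal illustration of that idea, assuming circular object footprints; the function name, parameters, and grid layout are assumptions for illustration only.

```python
import numpy as np

def objects_to_occupancy_grid(objects, width_m=20.0, height_m=20.0, resolution=0.5):
    """Interpret the shared object model for the robot: mark grid cells covered
    by each object's footprint as occupied (1), the rest as free (0).
    `objects` is a list of (x, y, radius) tuples in metres."""
    rows = int(height_m / resolution)
    cols = int(width_m / resolution)
    grid = np.zeros((rows, cols), dtype=np.uint8)
    ys, xs = np.mgrid[0:rows, 0:cols]
    cell_x = (xs + 0.5) * resolution      # metric centre of every cell
    cell_y = (ys + 0.5) * resolution
    for ox, oy, radius in objects:
        mask = (cell_x - ox) ** 2 + (cell_y - oy) ** 2 <= radius ** 2
        grid[mask] = 1
    return grid

# The same object model could be rendered for the operator as a simple 2D map
# with symbolic labels instead of grid cells.
grid = objects_to_occupancy_grid([(10.0, 5.0, 1.0), (11.2, 5.4, 0.3)])
print(grid.sum(), "occupied cells out of", grid.size)
```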

Spatial HRI and dialogue

Spatial HRI allows the robot and the operator to exchange environmental information. It includes a GUI (graphical user interface) for handling the graphical model information, special interfaces and methods for physically pointing to objects and areas, and methods for referring to physical objects and areas during the dialogue.

GUIs are always clumsy because they require a display (PC) for the operator or the machine. However, in the modelling phase the operator needs a graphical model in order to name the objects and associate them with the real environment. During the work task the GUI is especially useful when the distance between the operator and the robot is long. Using the GUI, the operator can “telepoint” objects in the model, and the robot can also refer directly to the model during the dialogue. The GUI will also include the non-spatial parts of the HRI.
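
Telepointing through the GUI can be illustrated as follows: a click on the map is resolved to the nearest named object of the shared model, and the robot receives a symbolic reference instead of raw coordinates. The object list, distance threshold, and message format below are illustrative assumptions.

```python
import math

# Toy object database shared by the GUI and the robot; coordinates in metres.
OBJECTS = {"tree": (10.0, 5.0), "ball": (11.2, 5.4), "door": (2.0, 0.5)}

def telepoint(click_x: float, click_y: float, max_dist: float = 1.5):
    """Resolve a click on the GUI map to a symbolic object reference that can
    be sent to the robot instead of raw coordinates."""
    name, (ox, oy) = min(
        OBJECTS.items(),
        key=lambda item: math.hypot(item[1][0] - click_x, item[1][1] - click_y),
    )
    if math.hypot(ox - click_x, oy - click_y) > max_dist:
        return {"type": "point", "x": click_x, "y": click_y}   # no named object nearby
    return {"type": "object", "name": name}

print(telepoint(11.0, 5.5))   # -> {'type': 'object', 'name': 'ball'}
```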

The special interfaces are methods for exchanging spatial information without a GUI, in the way that is most natural for both the operator and the robot (matching their cognitions). Typical examples are pointing devices such as laser pointers and sticks (tracked visually), preset beacons, and hand-in-hand pointing, where the operator takes the robot’s hand to the target point. These methods, as well as completely new ideas, will be implemented and tested together with spoken dialogue, especially in task teaching.
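
Hand-in-hand pointing can be sketched roughly as follows: once the operator has guided the robot’s hand to the target, the reached end-effector position is stored in the shared model under a spoken name. The robot interface used here is a stand-in; all names are illustrative.

```python
class FakeRobot:
    """Minimal stand-in for the real robot interface (illustrative only)."""
    def __init__(self):
        self.situation_map = {}
    def end_effector_position(self):
        return (1.20, 0.35, 0.90)   # pose reached after the operator guided the arm

def record_hand_in_hand_target(robot, name: str) -> str:
    """Hand-in-hand pointing: the operator physically guides the robot's hand
    to the target, and the reached end-effector position is stored in the
    shared object database under a spoken symbolic name."""
    x, y, z = robot.end_effector_position()
    robot.situation_map[name] = (x, y, z)    # the name can now be used in dialogue
    return f"Stored '{name}' at ({x:.2f}, {y:.2f}, {z:.2f})."

print(record_hand_in_hand_target(FakeRobot(), "charging point"))
```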

Expected results: generic methods to exchange spatial information between a robot and an operator, a demonstration of “spatial situation awareness” in the INTEGRATOR project (RP 9)

Task 2: Command language with symbolic functions for GIM

HRI, like any HMI, should always be as easy and natural as possible for the operator. Speech, gestures, and expressions are the most natural means of human communication, and they should also be used in HRI. Research on natural speech and language processing may eventually solve this problem, but the existing methods are not yet good enough. However, the vocabulary of robot control, task teaching, and supervision is more or less limited, and the form of commands and communication can be limited accordingly.

The topic of this task is to develop a command language which utilizes spoken and gesture dialogue supported by a situation awareness model, different interface devices, and a separate (internet) database. The language is targeted at commanding a service robot to perform tasks and at teaching it how to do so, and it includes the possibility of continuous dialogue. Both the robot and the human use speech and gestures for communication. The robot can additionally use simple expressions to indicate its state to the operator. The language will be developed in a generic form and demonstrated in a couple of work tasks with the WorkPartner robot.
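
A minimal sketch of such a restricted command language is given below: utterances are matched against a small verb vocabulary and the object names of the situation awareness model, and missing information triggers a clarification question, i.e. continuous dialogue. The grammar, slot names, and vocabulary are assumptions for illustration, not the final syntax of the language.

```python
import re

# Limited command vocabulary: a few verbs plus object names taken from the
# situation awareness model.
VERBS = {"go", "take", "bring", "follow", "stop"}
KNOWN_OBJECTS = {"ball", "tree", "door", "trailer"}

def parse_command(utterance: str) -> dict:
    """Parse a spoken utterance of a restricted command language into a
    structured task request, asking for clarification when slots are missing."""
    words = re.findall(r"[a-z]+", utterance.lower())
    verb = next((w for w in words if w in VERBS), None)
    objects = [w for w in words if w in KNOWN_OBJECTS]
    if verb is None:
        return {"status": "clarify", "question": "What should I do?"}
    if verb != "stop" and not objects:
        return {"status": "clarify", "question": f"{verb.capitalize()} what or where?"}
    return {"status": "ok", "verb": verb, "objects": objects}

print(parse_command("Please take the ball to the trailer"))
# {'status': 'ok', 'verb': 'take', 'objects': ['ball', 'trailer']}
print(parse_command("Take it"))
# {'status': 'clarify', 'question': 'Take what or where?'}
```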

Expected results: a generic dialogue-based language for service robots, a demonstration of the language in the INTEGRATOR project

Task 3: Further development of wearable human-machine interface technology

Even if the robot can mainly be controlled through spoken dialogue, more detailed and physical control is sometimes needed. This control can be pointing (cf. Task 7.1), direct teleoperation, movement teaching, etc. Traditionally, teleoperation has been performed with joysticks, pedals, and steering wheels or, in the case of human-like manipulators or other complicated structures, with special mechanisms. In the case of telepresence these devices have been supplemented with force or haptic feedback. When the operator is working (outside) alongside the robot, these traditional methods can be difficult to carry and use.

New technology makes it possible to implement new, lightweight, and natural user interfaces. For example, the “torso controller” of WorkPartner utilizes wire potentiometers and acceleration sensors in order to control both the body and the manipulator of the WorkPartner robot in a very natural way. The topic of this task is to develop new, innovative control interfaces based on inertial measurements, myoelectric signals, gaze, etc. Completely new interfacing methods will be developed and integrated with traditional ones in order to achieve the best possible usability.
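
As an illustration of this kind of mapping, the sketch below converts one sample from a wearable torso controller (lean angles and an arm-extension measurement) into body and manipulator commands. The sensor fields, gains, dead band, and command structure are assumptions, not the actual WorkPartner interface.

```python
from dataclasses import dataclass

@dataclass
class TorsoReading:
    """One sample from a wearable torso controller: accelerometer-derived lean
    angles and a wire-potentiometer arm extension (illustrative units)."""
    lean_forward_deg: float   # operator leaning forward/backward
    lean_side_deg: float      # operator leaning left/right
    arm_extension_m: float    # wire potentiometer along the operator's arm

def torso_to_command(r: TorsoReading, dead_band_deg: float = 3.0) -> dict:
    """Map wearable sensor readings to robot body and manipulator commands."""
    def scaled(angle_deg: float) -> float:
        # Ignore small postural noise, then apply a simple proportional gain.
        return 0.0 if abs(angle_deg) < dead_band_deg else 0.02 * angle_deg
    return {
        "body_speed": scaled(r.lean_forward_deg),            # lean forward -> drive forward
        "body_turn_rate": scaled(r.lean_side_deg),            # lean sideways -> turn
        "arm_reach": max(0.0, min(r.arm_extension_m, 1.0)),   # clamp arm extension
    }

print(torso_to_command(TorsoReading(10.0, -1.0, 0.6)))
# {'body_speed': 0.2, 'body_turn_rate': 0.0, 'arm_reach': 0.6}
```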

Expected results: generic control interfaces for robot control, a demonstration of the interfaces in the INTEGRATOR project

Task 4: Case study to develop a cognitive interactive HRI with a deep analysis of human cognitive ability

All human-robot interfacing is basically a matter of matching human cognition with robotic cognition. Human communication is effective when the partners are on the same cognitive level (as humans more or less are) and share the same language. Robotic cognition is very well known because we have created it. Human cognition, however, is still more or less a mystery: it is quite well known WHAT human cognition can do, but not HOW it does it. Despite intensive research, the functional principles of human cognition are not known. In order to develop a “cognitive interactive HRI”, robotics researchers have to cooperate more closely with psychologists.

The objective of this task is to gain a deeper understanding of the capabilities of human cognition in an interfacing situation. The research is based on testing humans with different robotic interfaces (developed in the tasks above) and work tasks. In particular, spatial cognition, the exchange of spatial information, and the teaching situation are within the scope of the task. The work will be carried out in cooperation with psychologists from Helsinki University, and it will be active throughout the project, utilizing the results of the tasks above. The final result will be the concept of “cognitive interactive HRI”.

Expected results: the concept of cognitive interactive HRI, demonstrations of the concept interface during the INTEGRATOR project

Task 5: Interaction between the user and the machines: utilization of the physical working environment and the virtual situation awareness model in mission configuration

In this task the focus is on how the user inputs can best be fed to the different layers of the control architecture. Generally, at least the following kinds of input can be identified: high-level user intentions (i.e. mission goals such as move-around-and-monitor-for-intruder), spatial goals (“route points”, such as go-through-doorX), mission alterations (“intelligent advice”, such as go-around-tableY-on-left-side), and tele-autonomy (direct control, such as turn-right-90-degrees-and-go-forward).

The architectures developed should also allow highly interactive operations between the user and the robot. The user should be able to introduce tasks even with the aid of a high-level language. The syntax of the language should include commands and objects that are related to the “situation awareness representation”, meaning, in this case, the physical environment, the objects in it, and their possible attributes. The objects can be either physical objects or targets of operations focused on a certain area of the physical world. The situation awareness map is modelled as an object database, which can be illustrated as a 2D map showing the objects in their positions.
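
A minimal sketch of this routing is given below: each user input is classified into one of the four categories and handed to a corresponding architecture layer. The layer names and the dispatch table are illustrative assumptions, not the project's final architecture.

```python
from dataclasses import dataclass
from enum import Enum, auto

class InputType(Enum):
    MISSION_GOAL = auto()        # e.g. move-around-and-monitor-for-intruder
    SPATIAL_GOAL = auto()        # e.g. go-through-doorX
    MISSION_ALTERATION = auto()  # e.g. go-around-tableY-on-left-side
    TELE_AUTONOMY = auto()       # e.g. turn-right-90-degrees-and-go-forward

@dataclass
class UserInput:
    type: InputType
    payload: str                 # symbolic reference into the situation awareness map

# Which layer of the control architecture consumes which kind of input.
LAYER_FOR_INPUT = {
    InputType.MISSION_GOAL: "mission planner",
    InputType.SPATIAL_GOAL: "navigation layer",
    InputType.MISSION_ALTERATION: "local planner",
    InputType.TELE_AUTONOMY: "motion controller",
}

def route(user_input: UserInput) -> str:
    layer = LAYER_FOR_INPUT[user_input.type]
    return f"'{user_input.payload}' handed to the {layer}"

print(route(UserInput(InputType.SPATIAL_GOAL, "go-through-doorX")))
# 'go-through-doorX' handed to the navigation layer
```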

Expected results: further definition and development of the user-machine language for GIM that facilitates communication between the entities and mission configuration