Technische Universität München Robotics and Embedded Systems
 

JAHIR - Joint-Action for Humans and Industrial Robots

 

Joint-Action for Humans and Industrial Robots (CoTeSys Project #328)

 

Project Overview


JAHIR aims to integrate industrial robots into human-dominated working areas in industrial settings, so that humans and robots can cooperate naturally on a true peer-to-peer level. To enable cooperation on this level, the project focuses on observing and understanding the non-verbal communication channels of the human partner, based on advanced computer vision algorithms as well as other measurable cues. Unlike in typical fixed set-ups, the environmental conditions in production areas are largely unconstrained, including crossing workers, moving machines, and variable lighting conditions. Therefore, stable preprocessing of the visual input signals is required to select only those objects or image regions that are relevant and reasonable for joint action.

 


The integration of various sensors, e.g. cameras and force/torque sensors, is needed for interaction scenarios such as handing over parts during assembly. Comprehensive task knowledge and a general representation of it are required to make the robot an equal partner in the assembly process, based on sophisticated learning and action planning strategies. The robot needs to react to sensor input in real time to avoid collisions, which requires advanced on-line motion and movement planning. Additionally, an intelligent safety system is necessary to ensure the physical safety of the human worker, going beyond state-of-the-art systems that slow down the robot or stop it when the human worker comes close.

The demonstration platform of JAHIR is integrated into the Cognitive Factory, one of the main demonstration set-ups of CoTeSys (Cognition for Technical Systems), which bridges the gap between fully automated and fully manual production processes.

The multiple and challenging scientific aspects of JAHIR can only be investigated through a rich set of interdisciplinary research activities. Therefore, electrical and mechanical engineers and computer scientists work together in JAHIR, supported by psychologists who have used, and continue to use, JAHIR as an experimental stage.

 

Public Demonstrations


AUTOMATICA 2008


Schunk Expert Days 2009

The members of JAHIR successfully presented their demonstration platform and research results at several major public events, including AUTOMATICA 2008, Münchener Kolloquium 2008, the 1st CoTeSys Workshop for Industry 2008, Münchener Kolloquium 2009, the Schunk Expert Days 2009, and CARV 2009. The JAHIR demonstrator has moreover been shown at several TUM events and in demonstrations for external guests from other research institutes and industry, e.g. Festo, BMW, Schunk, Reis Robotics, and KUKA. JAHIR sparked the interest of several industrial partners and paved the way for future cooperation with CoTeSys.

 

Project Progress


Throughout the runtime of the JAHIR project, the developed research platform has established itself both as an ideal testbed for several scientific approaches and as a stable showcasing system that has sparked industrial interest. The functional goal of JAHIR is to bring human workers and industrial robots together so that they can safely share the same physical workspace. By bridging the gap between automated and manual production, a novel hybrid assembly situation arises that combines highly adaptable manufacturing skills with high precision.

To realize a system controller that enables safe human-robot interaction, several reliable, real-time capable modules for input and output were integrated in the first steps. In close collaboration between JAHIR team members and partners from the projects ACIPE, ItrackU, and MuDiS, several tools were implemented for worker and desktop surveillance, covering computer vision, depth-map, and laser scanner based perception approaches.

 


Furthermore, several appropriate non-standard output modules were mechanically installed on the hardware side and made accessible from the software side, namely an articulated industrial 6-DOF robot, a gripper changing system, and the connected end effectors (e.g. drilling machine, pneumatic or electrical gripper, or glue gun). Worker guidance is provided by visualizing information with a tabletop-mounted video projector. Forces and torques arising on the mechanical side are measured by a force/torque sensor.

To close the desired Perception-Cognition-Action loop, a cognition-based system controller has been designed and implemented that interfaces all perception and action components. The underlying data streaming technique is based on the Real-time Database for Cognitive Automobiles, extended by additional communication channels. The current action control strategy of JAHIR follows a dialog-assisted assembly process. Starting from a finite state machine built with the UniMod library, a plan defining all steps of the manufacturing process may either be programmed in advance in the form of a JESS-oriented knowledge base or taught to the system at runtime.
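To make the dialog-assisted control flow concrete, the following minimal Python sketch models an assembly plan as a hand-rolled finite state machine. It is an illustration only: the state and event names are hypothetical, and the actual JAHIR controller builds on the UniMod library and a JESS-oriented knowledge base rather than on code like this.

# Minimal sketch of a dialog-assisted assembly state machine.
# States and events are hypothetical placeholders, not JAHIR's actual ones.

class AssemblyStateMachine:
    def __init__(self, plan):
        self.plan = plan          # ordered list of assembly steps
        self.step = 0
        self.state = "IDLE"

    def on_event(self, event):
        if self.state == "IDLE" and event == "worker_ready":
            self.state = "FETCH_PART"
        elif self.state == "FETCH_PART" and event == "part_grasped":
            self.state = "HAND_OVER"
        elif self.state == "HAND_OVER" and event == "part_received":
            self.step += 1
            self.state = "DONE" if self.step >= len(self.plan) else "IDLE"
        return self.state


fsm = AssemblyStateMachine(plan=["base plate", "gear", "cover"])
for ev in ["worker_ready", "part_grasped", "part_received"]:
    print(ev, "->", fsm.on_event(ev))

Taught-in plans fit the same structure: each teaching step appends to the plan list, while the state transitions stay fixed.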

 


Another focus was enhancing safety in the collaboration cell. Different safety sensors were installed in hardware and integrated in software. It is now possible to configure the shared workspace of the human and the robot. The human worker is detected and localized via sensor mats or a PMD (Photonic Mixer Device) range camera. If the worker comes too close to the robot, the robot slows down.
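The slow-down behaviour can be illustrated with a simple distance-to-speed mapping. The thresholds and the linear ramp below are assumptions chosen for illustration, not the certified safety logic of the JAHIR cell.

# Illustrative sketch of distance-based speed scaling, as described above.
# Threshold values and the linear ramp are assumptions.

def speed_factor(distance_m, stop_dist=0.3, slow_dist=1.2):
    """Map worker-robot distance to a velocity scaling factor in [0, 1]."""
    if distance_m <= stop_dist:
        return 0.0                      # worker very close: stop
    if distance_m >= slow_dist:
        return 1.0                      # worker far away: full speed
    # linear ramp between stopping distance and full-speed distance
    return (distance_m - stop_dist) / (slow_dist - stop_dist)


for d in (0.2, 0.6, 1.5):
    print(f"distance {d:.1f} m -> speed factor {speed_factor(d):.2f}")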

Further scientific highlights arising from this project addressed aspects such as efficient, human-adapted robot motion, safety requirements, and psychological studies on human factors during human-robot interaction. For more details on these aspects, please refer to the publications at the bottom of this page.

 

Videos

The Cognitive Factory Scenario - An Overview Video

The project JAHIR is embedded in the demonstration scenario "The Cognitive Factory". The video gives an overview of the scenario and of all integrated projects.

 

Task-based robot controller

Direct physical human-robot interaction has become a central topic in robotics research today. To exploit the potential of humans and robots working together as a team in industrial settings, the most important issues are safety for the human and an easy way to describe tasks for the robot. In the next video, we present a hierarchically structured controller for industrial robots in joint-action scenarios. Multiple atomic tasks, including dynamic collision avoidance, operational position, and posture, can be combined in an arbitrary order while respecting the constraints of higher-priority tasks. The control flow is based on the theory of orthogonal projection using nullspaces and constrained least-squares optimization.
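As a rough sketch of the underlying idea, the classical two-task prioritization scheme projects the secondary task into the nullspace of the primary one. The formula below is the generic textbook form; the notation is illustrative and not taken verbatim from the JAHIR publications:

\dot{q} = J_1^{+}\,\dot{x}_1 + \left(J_2 N_1\right)^{+}\left(\dot{x}_2 - J_2 J_1^{+}\,\dot{x}_1\right),
\qquad N_1 = I - J_1^{+} J_1

Here J_i is the Jacobian of task i, \dot{x}_i the desired task velocity, J_1^{+} the Moore-Penrose pseudoinverse, and N_1 the orthogonal projector onto the nullspace of the primary task, so that executing the secondary task can never disturb the primary one.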

 

Internal 3D representation

This video shows a visualization of the internal 3D representation used in the robot controller to measure distances for the collision avoidance task. The 3D representation includes static and dynamic objects and is updated according to sensor data. Every sensor module can broadcast its information about the current status of uniquely identified objects in the workspace.
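A minimal sketch of such an object registry is shown below, assuming a simplified sphere-based object model; the actual JAHIR representation also handles more complex static and dynamic geometry. The class and method names are hypothetical.

# Sketch of an internal 3D object registry for distance queries.
# Objects are approximated as bounding spheres for illustration only.

import math

class WorldModel:
    def __init__(self):
        self.objects = {}   # object id -> (center (x, y, z), radius)

    def update(self, obj_id, center, radius):
        """Called whenever a sensor module broadcasts a new object state."""
        self.objects[obj_id] = (center, radius)

    def min_distance(self, point):
        """Smallest surface distance from a robot point to any known object."""
        def dist(entry):
            center, radius = entry
            return math.dist(center, point) - radius
        return min(map(dist, self.objects.values()), default=float("inf"))


world = WorldModel()
world.update("worker_hand", center=(0.6, 0.1, 0.9), radius=0.08)
print(world.min_distance((0.5, 0.0, 1.0)))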

 

Learning a new building plan and executing previously learned building plans

In these videos, we show how our industrial robot is instructed via speech input, and how the JAHIR robot follows and executes a collaborative plan that was previously taught in using multiple input modalities.

 

Kinect-enabled robot workspace surveillance

A human co-worker is tracked using a Microsoft Kinect in a human-robot collaborative scenario.

 

Interactive human-robot collaboration with Microsoft Kinect

In this video, a human interacts with the robotic assistive system JAHIR to jointly perform assembly tasks. The human is tracked using a Microsoft Kinect. Virtual buttons in the menu-guided interaction allow a dynamic and adaptive way of controlling and interacting with the robotic system. The overlay video on the upper right shows a close-up of the working desk with the on-table projection of the buttons, the virtual environment representation, and the menu.
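The core of such a virtual button is a simple dwell-time hit test on the tracked hand position, as in the sketch below. It assumes hand coordinates in the table frame; the class name, geometry, and timing are illustrative, not taken from the JAHIR implementation.

# Sketch of a projected virtual button with dwell-time activation.
# Coordinates are assumed to be in the table plane; all values illustrative.

import time

class VirtualButton:
    def __init__(self, x, y, w, h, dwell_s=1.0):
        self.rect = (x, y, w, h)      # projected button area on the table
        self.dwell_s = dwell_s        # how long the hand must stay inside
        self.entered_at = None

    def update(self, hand_x, hand_y, now=None):
        """Return True once the hand has dwelled inside the button area."""
        now = time.monotonic() if now is None else now
        x, y, w, h = self.rect
        inside = x <= hand_x <= x + w and y <= hand_y <= y + h
        if not inside:
            self.entered_at = None
            return False
        if self.entered_at is None:
            self.entered_at = now
        return now - self.entered_at >= self.dwell_s


button = VirtualButton(x=0.10, y=0.20, w=0.08, h=0.08)
print(button.update(0.12, 0.24, now=0.0))   # False: hand just entered
print(button.update(0.12, 0.24, now=1.2))   # True: dwell time elapsed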

 

Fusing multiple Kinects to survey shared human-robot workspaces

In today's industrial applications, robots and humans are often strictly separated in space or time. Without surveillance of the joint workspace, a robot is unaware of unforeseen changes in its environment and cannot react properly. Since robots are heavy and bulky machines, collisions may have severe consequences for the human worker. Thus, the work area must be monitored to recognize unknown obstacles in it; with this knowledge of its surroundings, the robot can be controlled to avoid collisions with any detected object, enabling direct collaboration between humans and industrial robots. To this end, the environment is perceived by multiple, distributed range sensors (Microsoft Kinect). The sensory data sets are decentrally pre-processed and broadcast over the network. These data sets are then processed by additional components that segment and cluster unknown objects and publish the gained information, allowing the system to react to unexpected events in the environment.
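The fusion step can be sketched as follows: each calibrated sensor's point cloud is transformed into a common world frame, the clouds are merged, and the remaining unknown points are grouped into object clusters. This sketch assumes extrinsically calibrated sensors (4x4 pose matrices) and uses numpy and scikit-learn's DBSCAN as illustrative choices, not the project's actual pipeline.

# Sketch: fuse point clouds from multiple Kinects and cluster unknown points.
# Assumes known geometry (robot, table) has already been subtracted.

import numpy as np
from sklearn.cluster import DBSCAN

def to_world(points, sensor_pose):
    """Transform an (N, 3) point cloud by a 4x4 sensor-to-world pose."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ sensor_pose.T)[:, :3]

def fuse_and_cluster(clouds_with_poses, eps=0.05, min_points=20):
    """Merge calibrated clouds and group unknown points into object clusters."""
    merged = np.vstack([to_world(pts, pose) for pts, pose in clouds_with_poses])
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(merged)
    return [merged[labels == k] for k in set(labels) if k != -1]

Each resulting cluster can then be published as one uniquely identified object, matching the broadcast scheme of the internal 3D representation described above.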

 

 

People and Partners

Robotics and Embedded Systems

Human-Machine Communication, Department of Electrical Engineering and Information Technologies

Machine Tools and Industrial Management, Department of Mechanical Engineering

Acknowledgement

This ongoing work is supported by the cluster of excellence Cognition for Technical Systems (CoTeSys), funded within the DFG excellence initiative; see www.cotesys.org for further details.

Publications