Technische Universität München, Robotics and Embedded Systems

JAST

Joint-Action Science and Technology

Project overview

The success of the human species critically depends on our extraordinary ability to engage in joint action. Our perceptions, decisions, and behaviour are tuned to those of others with whom we share beliefs, intentions, and goals, and with whom we thus form a group.

These insights motivate the JAST project to develop jointly-acting autonomous systems that communicate and work intelligently on mutual tasks in dynamic, unstructured environments. This goal reaches far beyond the study of individual cognitive systems and expands the concept of "group" to "human plus artificial agent(s)".

JAST will build cognitive systems that are "socially aware", thereby building trust and confidence in technology. The tools that ultimately result from the project will be applicable in industry and society.

Intelligent, autonomous agents will be developed that cooperate and communicate with their peers and with humans while working on a mutual task. Each agent will be endowed with a vision system, a gripper, and a speech recognition/production system, so that in cooperative configurations the agents can carry out complex construction tasks. Perceptual modules will be implemented for object recognition and for recognizing the gestures and actions of the partner (human or robot).
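
To make this architecture concrete, the following Python sketch shows how such an agent's perception-action loop might be organised. The module and method names (Percept, ObjectRecognizer-style detect(), decide(), pick_up(), etc.) are illustrative assumptions, not the actual JAST interfaces.

# Hypothetical sketch only: module names and interfaces are assumptions,
# not the actual JAST implementation.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Percept:
    label: str                     # e.g. "red bolt" or "pointing gesture"
    position: Tuple[float, float]  # table coordinates
    confidence: float

@dataclass
class Action:
    say: Optional[str] = None      # utterance to produce, if any
    grasp: Optional[str] = None    # object label to pick up, if any

class JointActionAgent:
    """One agent: vision and gesture recognition feed a cognitive layer
    that drives speech output and the gripper."""

    def __init__(self, vision, gestures, speech, gripper, cognition):
        self.vision, self.gestures = vision, gestures
        self.speech, self.gripper = speech, gripper
        self.cognition = cognition

    def step(self) -> None:
        # Gather percepts about objects and the partner's gestures/actions.
        percepts: List[Percept] = self.vision.detect() + self.gestures.detect()
        heard: Optional[str] = self.speech.listen()
        # The cognitive layer decides what to say and what to manipulate next.
        action: Action = self.cognition.decide(percepts, heard)
        if action.say:
            self.speech.say(action.say)
        if action.grasp:
            self.gripper.pick_up(action.grasp)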

The construction of a "Baufix" toy airplane has been selected as a sample task for this purpose.
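
As a rough illustration, such an assembly task can be written down as an ordered plan of sub-assemblies that either partner may carry out. The part names and plan format below are assumptions for the sake of the example, not the actual JAST representation.

# Illustrative assembly plan for a toy airplane; part names and structure
# are assumptions, not the actual JAST plan format.
assembly_plan = [
    {"produces": "tail assembly", "parts": ["short slat", "bolt"]},
    {"produces": "wing assembly", "parts": ["long slat", "long slat", "bolt"]},
    {"produces": "airplane",      "parts": ["wing assembly", "tail assembly", "cube"]},
]

def next_step(plan, built):
    """Return the first step not yet built whose sub-assembly inputs exist."""
    for step in plan:
        needed = [p for p in step["parts"] if p.endswith("assembly")]
        if step["produces"] not in built and all(p in built for p in needed):
            return step
    return None

print(next_step(assembly_plan, built=set()))  # -> the "tail assembly" step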

This work is supported by the EU FP6 IST Cognitive Systems Integrated Project "JAST" (FP6-003747-IP).

Videos

Overview video

This is a short sample video (unfortunately without sound) intended to give you an impression of our current work in the JAST project.

Robot Vision

This video shows a short capture of the output of the robot vision system: what the robot sees with its top-view camera and which information is passed on to the cognitive layer.
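
As an illustration of the kind of data involved, one frame of vision output handed to the cognitive layer might look like the example below. The field names and units are assumptions, not the actual JAST message format.

# Assumed example of a single frame of vision output; not the actual format.
frame = {
    "timestamp_s": 1234.56,
    "objects": [
        {"label": "red bolt",   "position_mm": (312, 145), "angle_deg": 40, "confidence": 0.93},
        {"label": "green cube", "position_mm": (480, 220), "angle_deg": 0,  "confidence": 0.88},
    ],
    "partner": [
        {"gesture": "pointing", "target": "green cube", "confidence": 0.71},
    ],
}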

 

People

Partners

JAST is a European project; we work closely with the following partners:

Publications

[1] Aleksandra Kupferberg, Stefan Glasauer, Markus Huber, Markus Rickert, Alois Knoll, and Thomas Brandt. Biological movement increases acceptance of humanoid robots as human partners in motor interaction. AI & Society, 26(4):339-345, 2011. [ DOI | .bib | .pdf ]
[2] Manuel Giuliani, Claus Lenz, Thomas Müller, Markus Rickert, and Alois Knoll. Design principles for safety in human-robot interaction. International Journal of Social Robotics, 2(3):253-274, 2010. [ DOI | .bib | .pdf ]
[3] Manuel Giuliani, Mary Ellen Foster, Amy Isard, Colin Matheson, Jon Oberlander, and Alois Knoll. Situated reference in a hybrid human-robot interaction system. In Proceedings of the 6th International Natural Language Generation Conference (INLG 2010), Dublin, Ireland, 2010. [ .bib | .pdf ]
[4] Mary Ellen Foster, Manuel Giuliani, Amy Isard, Colin Matheson, Jon Oberlander, and Alois Knoll. Evaluating description and reference strategies in a cooperative human-robot dialogue system. In Proceedings of the Twenty-first International Joint Conference on Artificial Intelligence (IJCAI-09), Pasadena, California, 2009. [ .bib | .pdf ]
[5] Mary Ellen Foster, Manuel Giuliani, and Alois Knoll. Comparing objective and subjective measures of usability in a human-robot dialogue system. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP 2009), Singapore, 2009. [ .bib | .pdf ]
[6] Markus Huber, Helmuth Radrich, Cornelia Wendt, Markus Rickert, Alois Knoll, Thomas Brandt, and Stefan Glasauer. Evaluation of a novel biologically inspired trajectory generator in human-robot interaction. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, pages 639-644, Toyama, Japan, 2009. [ DOI | .bib | .pdf ]
[7] Aleksandra Kupferberg, Stefan Glasauer, Markus Huber, Markus Rickert, Alois Knoll, and Thomas Brandt. Video observation of humanoid robot movements elicits motor interference. In Proceedings of the Symposium on New Frontiers in Human-Robot Interaction, Adaptive and Emergent Behaviour and Complex Systems Convention, pages 81-85, Edinburgh, Scotland, 2009. [ .bib | .pdf ]
[8] Claus Lenz, Markus Rickert, Giorgio Panin, and Alois Knoll. Constraint task-based control in industrial settings. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3058-3063, St. Louis, MO, USA, 2009. [ DOI | .bib | .pdf ]
[9] Thomas Müller and Alois Knoll. Attention driven visual processing for an interactive dialog robot. In Proceedings of the 24th ACM Symposium on Applied Computing, Honolulu, Hawaii, USA, 2009. [ DOI | .bib | .pdf ]
[10] Markus Huber, Markus Rickert, Alois Knoll, Thomas Brandt, and Stefan Glasauer. Human-robot interaction in handing-over tasks. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, pages 107-112, Munich, Germany, August 2008. [ DOI | .bib | .pdf ]
[11] Ellen Gurman Bard, Robin Hill, and Mary Ellen Foster. What tunes accessibility of referring expressions in task-related dialogue? In Proceedings of the 30th Annual Meeting of the Cognitive Science Society (CogSci 2008), Chicago, 2008. [ .bib | .pdf ]
[12] Ellen Gurman Bard, Robin Hill, and Mary Ellen Foster. Who tunes accessibility of referring expressions in task-related dialogue? In Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue (Londial 2008), London, 2008. [ .bib | .pdf ]
[13] Mary Ellen Foster, Manuel Giuliani, Thomas Müller, Markus Rickert, Alois Knoll, Wolfram Erlhagen, Estela Bicho, Nzoji Hipólito, and Luis Louro. Combining goal inference and natural-language dialogue for human-robot joint action. In Proceedings of the International Workshop on Combinations of Intelligent Methods and Applications, European Conference on Artificial Intelligence, Patras, Greece, 2008. [ .bib | .pdf ]
[14] Mary Ellen Foster, Ellen Gurman Bard, Robin L. Hill, Markus Guhe, Jon Oberlander, and Alois Knoll. The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction (HRI 2008), pages 295-302, Amsterdam, 2008. [ DOI | .bib | .pdf ]
[15] Mary Ellen Foster and Colin Matheson. Following assembly plans in cooperative, task-based human-robot dialogue. In Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue (Londial 2008), London, 2008. [ .bib | .pdf ]
[16] Manuel Giuliani and Alois Knoll. MultiML: A general purpose representation language for multimodal human utterances. In Proceedings of the IEEE International Conference on Multimodal Interfaces (ICMI), Chania, Crete, 2008. [ DOI | .bib | .pdf ]
[17] Markus Huber, Claus Lenz, Markus Rickert, Alois Knoll, Thomas Brandt, and Stefan Glasauer. Human preferences in industrial human-robot interactions. In Proceedings of the International Workshop on Cognition for Technical Systems, Munich, Germany, 2008. [ .bib | .pdf ]
[18] Thomas Müller, Claus Lenz, Simon Barner, and Alois Knoll. Accelerating integral histograms using an adaptive approach. In Proceedings of the 3rd International Conference on Image and Signal Processing, Lecture Notes in Computer Science (LNCS), pages 209-217, Cherbourg-Octeville, France, 2008. Springer. [ DOI | .bib | .pdf ]
[19] Thomas Müller, Pujan Ziaie, and Alois Knoll. A wait-free realtime system for optimal distribution of vision tasks on multicore architectures. In Proceedings of the 5th International Conference on Informatics in Control, Automation and Robotics, Funchal, Portugal, 2008. [ .bib | .pdf ]
[20] Thomas Müller and Alois Knoll. Bioinspired early visual processing: The attention condensation mechanism. In Proceedings of the Australasian Conference on Robotics and Automation, Canberra, Australia, 2008. [ .bib | .pdf ]
[21] Thomas Müller and Alois Knoll. Humanoid early visual processing using attention mechanisms. In Proceedings of the Workshop on Cognitive Humanoid Vision at the IEEE-RAS International Conference on Humanoid Robots, Daejeon, Korea, 2008. [ .bib | .pdf ]
[22] Markus Rickert, Oliver Brock, and Alois Knoll. Balancing exploration and exploitation in motion planning. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 2812-2817, Pasadena, CA, USA, 2008. [ DOI | .bib | .pdf ]
[23] Pujan Ziaie, Thomas Müller, Mary Ellen Foster, and Alois Knoll. A naïve Bayes classifier with distance weighting for hand-gesture recognition. In Proceedings of the 13th International CSI Computer Conference (CSICC 2008), Kish Island, Iran, 2008. [ .bib | .pdf ]
[24] Pujan Ziaie, Thomas Müller, and Alois Knoll. A novel approach to hand-gesture recognition in a human-robot dialog system. In Proceedings of the First International Workshop on Image Processing Theory, Tools & Applications, Sousse, Tunisia, 2008. [ .bib | .pdf ]
[25] Mary Ellen Foster. Enhancing human-computer interaction with embodied conversational agents. In Constantine Stephanidis, editor, Proceedings of the 4th International Conference on Universal Access in Human-Computer Interaction, HCI International, Part II, volume 4555 of Lecture Notes in Computer Science, pages 828-837, Beijing, 2007. Springer. [ DOI | .bib | .pdf ]
[26] Mary Ellen Foster. Roles of a talking head in a cooperative human-robot dialogue system. In Proceedings of the 7th International Conference on Intelligent Virtual Agents (IVA07), Poster session, Paris, 2007. [ DOI | .bib | .pdf ]
[27] Manuel Giuliani and Alois Knoll. Integrating multimodal cues using grammar based models. In Constantine Stephanidis, editor, Proceedings of the 4th International Conference on Universal Access in Human-Computer Interaction, HCI International, Part II, volume 4555 of Lecture Notes in Computer Science, pages 858-867, Beijing, 2007. Springer. [ DOI | .bib | .pdf ]
[28] Markus Rickert, Mary Ellen Foster, Manuel Giuliani, Tomas By, Giorgio Panin, and Alois Knoll. Integrating language, vision and action for human robot dialog systems. In Proceedings of the International Conference on Universal Access in Human-Computer Interaction, HCI International, volume 4555 of Lecture Notes in Computer Science, pages 987-995, Beijing, China, 2007. Springer. [ DOI | .bib | .pdf ]
[29] Mary Ellen Foster. Dialogue management for cooperative, symmetrical human-robot interaction. In Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue (Brandial 2006), Potsdam, 2006. [ .bib | .pdf ]
[30] Mary Ellen Foster. The iCat in the JAST multimodal dialogue system. In Proceedings of the First iCat Workshop, Eindhoven, 2006. [ .bib | .pdf ]
[31] Mary Ellen Foster, Markus Rickert, and Michael Braun. The JAST collaborative human-robot dialogue system. In Proceedings of CogSys II (Poster session), Nijmegen, Netherlands, 2006. [ .bib | .pdf ]
[32] Mary Ellen Foster, Tomas By, Markus Rickert, and Alois Knoll. Human-robot dialogue for joint construction tasks. In Proceedings of the 8th International Conference on Multimodal Interfaces, pages 68-71, Banff, AB, Canada, 2006. [ DOI | .bib | .pdf ]
[33] Mary Ellen Foster, Tomas By, Markus Rickert, and Alois Knoll. Symmetrical joint action in human-robot dialogue. In Proceedings of the Workshop on Intuitive Human-Robot Interaction for Getting the Job Done, Robotics Science and Systems, Philadelphia, 2006. [ .bib | .pdf ]