creators_name: Baillie, Jean-Christophe
editors_name: Berthouze, Luc
editors_name: Kozima, Hideki
editors_name: Prince, Christopher G.
editors_name: Sandini, Giulio
editors_name: Stojanov, Georgi
editors_name: Metta, Giorgio
editors_name: Balkenius, Christian
type: confpaper
datestamp: 2005-04-14
lastmod: 2011-03-11 08:55:49
metadata_visibility: show
title: Grounding Symbols in Perception with Two Interacting Autonomous Robots
ispublished: pub
subjects: comp-sci-mach-learn
subjects: comp-sci-art-intel
subjects: comp-sci-robot
full_text_status: public
keywords: symbolic representation, symbol grounding, social language learning, autonomous robots, embodied agents
abstract: Grounding symbolic representations in perception is a key and difficult issue for artificial intelligence. The "Talking Heads" experiment (Steels and Kaplan, 2002) explores an interesting coupling between grounding and the social learning of language. In the first version of this experiment, two cameras interacted in a simplified visual environment consisting of colored shapes on a whiteboard, and they developed a shared, grounded lexicon. We present here the beginning of a new experiment that extends the original one, replacing the two cameras with two autonomous robots in a complex, unconstrained visual environment. We review the difficulties raised specifically by the embodiment of the agents and propose some directions for addressing these questions.
date: 2004
date_type: published
volume: 117
publisher: Lund University Cognitive Studies
pagerange: 107-110
refereed: TRUE
citation: Baillie, Jean-Christophe (2004) Grounding Symbols in Perception with Two Interacting Autonomous Robots. [Conference Paper]
document_url: http://cogprints.org/4054/1/baillie.pdf