<?xml version="1.0" encoding="utf-8" ?>
<feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"
	xmlns:xhtml="http://www.w3.org/1999/xhtml"
	xmlns:sword="http://purl.org/net/sword/"
>
<title>Cogprints: No conditions. Results ordered -Date, Title. </title>
<link rel="alternate" href="http://cogprints.org/"/>
<updated>2018-01-17T14:24:09Z</updated>
<generator uri="http://www.eprints.org/" version="3.3.10">EPrints</generator>
<logo>http://cogprints.org/images/sitelogo.gif</logo>
<id>http://cogprints.org/</id>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/8716/Atom/cogprints-eprint-8716.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/8716"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/8716/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/8716/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/8716"/>
  <published>2012-11-09T19:59:50Z</published>
  <updated>2013-02-18T15:13:20Z</updated>
  <id>http://cogprints.org/id/eprint/8716</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/8716"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/8716</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/8716">
    <sword:depositedOn>2012-11-09T19:59:50Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Turing Test, Chinese Room Argument, Symbol Grounding Problem. Meanings in Artificial Agents</title>
  <summary type="xhtml">The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) all address the question “can machines think?”. We propose to look at that question through the capability of Artificial Agents (AAs) to generate meaningful information as humans do. We present the TT, the CRA and the SGP as being about the generation of human-like meanings, and analyse the possibility for AAs to generate such meanings. For this we use the existing Meaning Generator System (MGS), in which a system submitted to a constraint generates a meaning in order to satisfy that constraint. This system-level approach allows comparing meaning generation in animals, humans and AAs. The comparison shows that in order to design AAs capable of generating human-like meanings, we need the possibility to transfer human constraints to AAs. That requirement raises concerns stemming from the unknown natures of life and human consciousness, which are at the root of human constraints. Corresponding implications for the TT, the CRA and the SGP are highlighted. The use of the MGS shows that designing AAs capable of thinking and feeling like humans needs an understanding of the natures of life and the human mind that we do not have today. Following an evolutionary approach, we propose as a first entry point an investigation into extending life to AAs in order to design AAs carrying a “stay alive” constraint. Ethical concerns are raised by the relations between human constraints and human values. Continuations are proposed.</summary>
  <author>
    <name>Mr Christophe Menant</name>
    <email>christophe.menant@hotmail.fr</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/8015/Atom/cogprints-eprint-8015.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/8015"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/8015/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/8015/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/8015"/>
  <published>2012-11-09T19:23:18Z</published>
  <updated>2012-11-09T19:23:18Z</updated>
  <id>http://cogprints.org/id/eprint/8015</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/8015"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/8015</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/8015">
    <sword:depositedOn>2012-11-09T19:23:18Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">An evolutionary behavioral model for decision making</title>
  <summary type="xhtml">For autonomous agents, the problem of deciding what to do next becomes increasingly complex when acting in unpredictable and dynamic environments while pursuing multiple and possibly conflicting goals. One of the most relevant behavior-based models that tries to deal with this problem is the one proposed by Maes, the Behavior Network model. This model proposes a set of behaviors as purposive perception-action units which are linked in a non-hierarchical network, and whose behavior selection process is orchestrated by spreading activation dynamics. In spite of being an adaptive model (in the sense of self-regulating its own behavior selection process), and despite the fact that several extensions have been proposed in order to improve the original model's adaptability, there is not yet a robust model that can adaptively self-modify both the topological structure and the functional purpose of the network as a result of the interaction between the agent and its environment. Thus, this work proffers an innovative hybrid model driven by gene expression programming, which makes two main contributions: (1) given an initial set of meaningless and unconnected units, the evolutionary mechanism is able to build well-defined and robust behavior networks which are adapted and specialized to concrete internal agent needs and goals; and (2) the same evolutionary mechanism is able to assemble quite complex structures such as deliberative plans (which operate in the long term) and problem-solving strategies.</summary>
  <author>
    <name>Dr Oscar Javier Romero Lopez</name>
    <email>ojrlopez@hotmail.com</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/7284/Atom/cogprints-eprint-7284.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/7284"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/7284/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/7284/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/7284"/>
  <published>2011-05-02T17:12:27Z</published>
  <updated>2011-05-02T17:12:27Z</updated>
  <id>http://cogprints.org/id/eprint/7284</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/7284"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/7284</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/7284">
    <sword:depositedOn>2011-05-02T17:12:27Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Intelligent Agents in Military, Defense and Warfare: Ethical Issues and Concerns</title>
  <summary type="xhtml">Due to tremendous progress in digital electronics, intelligent and autonomous agents are now gradually being adopted into the fields and domains of the military, defense and warfare. This paper tries to explore some of the inherent ethical issues and threats, and some remedial measures, concerning the impact of such systems on human civilization and existence in general. This paper discusses human ethics in contrast to machine ethics and the problems caused by non-sentient agents. A systematic study is made of paradoxes regarding the long-term advantages of such agents in military combat. This paper proposes an international standard which could be adopted by all nations to bypass the adverse effects and solve the ethical issues of such intelligent agents.</summary>
  <author>
    <name>Mr. Sahon Bhattacharyya</name>
    <email>sahon.dgro@acm.org</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/7335/Atom/cogprints-eprint-7335.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/7335"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/7335/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/7335/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/7335"/>
  <published>2011-05-04T02:33:08Z</published>
  <updated>2011-05-04T02:33:08Z</updated>
  <id>http://cogprints.org/id/eprint/7335</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/7335"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/7335</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/7335">
    <sword:depositedOn>2011-05-04T02:33:08Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Doing, Feeling, Meaning And Explaining</title>
  <summary type="xhtml">It is “easy” to explain doing, “hard” to explain feeling. Turing has set the agenda for the easy explanation (though it will be a long time coming). I will try to explain why and how explaining feeling will not only be hard, but impossible. Explaining meaning will prove almost as hard because meaning is a hybrid of know-how and what it feels like to know how. </summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/7334/Atom/cogprints-eprint-7334.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/7334"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/7334/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/7334/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/7334"/>
  <published>2011-05-04T01:53:25Z</published>
  <updated>2011-05-04T01:53:25Z</updated>
  <id>http://cogprints.org/id/eprint/7334</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/7334"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/7334</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/7334">
    <sword:depositedOn>2011-05-04T01:53:25Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Minds, Brains and Turing</title>
  <summary type="xhtml">Turing set the agenda for (what would eventually be called) the cognitive sciences. He said, essentially, that cognition is as cognition does (or, more accurately, as cognition is capable of doing): Explain the causal basis of cognitive capacity and you’ve explained cognition. Test your explanation by designing a machine that can do everything a normal human cognizer can do – and do it so veridically that human cognizers cannot tell its performance apart from a real human cognizer’s – and you really cannot ask for anything more. Or can you? Neither Turing modelling nor any other kind of computational or dynamical modelling will explain how or why cognizers feel.</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/7961/Atom/cogprints-eprint-7961.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/7961"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/7961/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/7961/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/7961"/>
  <published>2012-11-09T17:47:35Z</published>
  <updated>2012-11-09T17:47:35Z</updated>
  <id>http://cogprints.org/id/eprint/7961</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/7961"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/7961</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/7961">
    <sword:depositedOn>2012-11-09T17:47:35Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Representational information: a new general notion and measure of information</title>
  <summary type="xhtml">In what follows, we introduce the notion of representational information (information conveyed by sets of dimensionally defined objects about their superset of origin) as well as an original deterministic mathematical framework for its analysis and measurement. The framework, based in part on categorical invariance theory [Vigo, 2009], unifies three key constructs of universal science – invariance, complexity, and information. From this unification we define the amount of information that a well-defined set of objects R carries about its finite superset of origin S as the rate of change in the structural complexity of S (as determined by its degree of categorical invariance) whenever the objects in R are removed from the set S. The measure captures deterministically the significant role that context and category structure play in determining the relative quantity and quality of subjective information conveyed by particular objects in multi-object stimuli.</summary>
  <author>
    <name>Professor Ronaldo Vigo</name>
    <email>vigo@ohio.edu</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/7030/Atom/cogprints-eprint-7030.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/7030"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/7030/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/7030/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/7030"/>
  <published>2011-02-16T19:48:55Z</published>
  <updated>2011-03-11T08:57:45Z</updated>
  <id>http://cogprints.org/id/eprint/7030</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/7030"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/7030</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/7030">
    <sword:depositedOn>2011-02-16T19:48:55Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Follow the White Rabbit – A Robot Rabbit as a Vocabulary Trainer</title>
  <summary type="xhtml">In the following, the results of a case study evaluating the user experience and motivational function of a vocabulary trainer in the form of a robot rabbit are presented. The results show that fifth-grade pupils who learned with the rabbit rated the application highly with respect to both Ease of Use and Perceived Usefulness, and also rated the hedonic and pragmatic quality of the robot rabbit Nabaztag as high. Moreover, the pupils who learned with the robot were afterwards in a more positive mood than those who learned by the traditional method. After one week, the pupils who had learned with the Nabaztag recalled, on average, a higher number of vocabulary items than the control group. The fact that the pupils were not only willing to use the Nabaztag again but would also recommend it to friends shows, together with the results of the recorded scales, that the application fulfils a fundamental precondition for the emergence of motivation and enjoyment of use.</summary>
  <author>
    <name>Mr. Lucas Carstens</name>
    <email>lucas.carstens@stud.uni-due.de</email>
  </author>
  <author>
    <name>Mr. Ulrich Schächtle</name>
    <email>ulrich.schächtle@stud.uni-due.de</email>
  </author>
  <author>
    <name>Mrs. Clarissa M. Salisbury</name>
    <email>clarissa.salisbury@stud.uni-due.de</email>
  </author>
  <author>
    <name>Mrs. Sabrina C. Eimler</name>
    <email>sabrina.eimler@uni-due.de</email>
  </author>
  <author>
    <name>Mrs. Astrid M. von der Pütten</name>
    <email>astrid.von-der-putten@uni-due.de</email>
  </author>
  <author>
    <name>Prof.Dr. Nicole C. Krämer</name>
    <email>nicole.kraemer@uni-due.de</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/7181/Atom/cogprints-eprint-7181.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/7181"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/7181/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/7181/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/7181"/>
  <published>2011-02-16T19:48:48Z</published>
  <updated>2011-03-11T08:57:50Z</updated>
  <id>http://cogprints.org/id/eprint/7181</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/7181"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/7181</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/7181">
    <sword:depositedOn>2011-02-16T19:48:48Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Don't Let Your Ears Droop! A Study on the Effect of the Ear Language of a Rabbit-Shaped Communication Robot</title>
  <summary type="xhtml">In the following, the results of a study on the effect of the nonverbal behaviour of the communication robot Nabaztag, and its implications for the user experience, are presented. 100 participants in Germany were shown photos of the rabbit with its ears in different positions. The results show that different emotional states are attributed to the rabbit depending on the position of its ears. These results are of essential importance for a deliberate matching between verbal and nonverbal communication content, but also for a coherent user experience and thus for enjoyment of use.</summary>
  <author>
    <name>Mrs. Sabrina C. Eimler</name>
    <email>sabrina.eimler@uni-due.de</email>
  </author>
  <author>
    <name>Mrs. Tina Ganster</name>
    <email>tina.ganster@stud.uni-due.de</email>
  </author>
  <author>
    <name>Mrs. Astrid M. von der Pütten</name>
    <email>astrid.von-der-putten@uni-due.de</email>
  </author>
  <author>
    <name>Prof.Dr. Nicole C. Krämer</name>
    <email>nicole.kraemer@uni-due.de</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/7182/Atom/cogprints-eprint-7182.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/7182"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/7182/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/7182/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/7182"/>
  <published>2011-02-16T19:48:28Z</published>
  <updated>2011-03-11T08:57:50Z</updated>
  <id>http://cogprints.org/id/eprint/7182</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/7182"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/7182</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/7182">
    <sword:depositedOn>2011-02-16T19:48:28Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Closing and Closure in Human-Companion Interactions: Analyzing Video Data from a Field Study</title>
  <summary type="xhtml">A field study with a simple robotic companion is being undertaken in three iterations in the framework of an EU FP7 research project. The interest of this study lies in its design: the robotic interface setup is installed in the subjects' homes and video data are collected over ten days. This gives the rare opportunity to study the development of human-robot relationships over time, and the integration of companion technologies into everyday life. This paper outlines the qualitative inductive approach to data analysis, and discusses selected results. The focus here is on the interactional mechanisms of bringing conversations to an end. The paper distinguishes between "closing" as the conversational mechanism for doing this, and "closure" as the social norm that motivates it. We argue that this distinction is relevant for interaction designers insofar as they have to be aware of the compelling social norms that are invoked by a companion's conversational behaviour.</summary>
  <author>
    <name>Sabine Payr</name>
    <email>Sabine.Payr@ofai.at</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/6863/Atom/cogprints-eprint-6863.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/6863"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/6863/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/6863/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/6863"/>
  <published>2010-07-01T01:19:19Z</published>
  <updated>2011-03-11T08:57:37Z</updated>
  <id>http://cogprints.org/id/eprint/6863</id>
  <category term="techreport" label="Departmental Technical Report" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/6863"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/6863</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/6863">
    <sword:depositedOn>2010-07-01T01:19:19Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Companions, Virtual Butlers, Assistive Robots: Empirical and Theoretical Insights for Building Long-Term Social Relationships. </title>
  <summary type="xhtml">Robots and agents are becoming increasingly prominent in everyday life, e.g. as companions, user interfaces to smart homes, household robots, or for lifestyle reassurance. In these roles, they have to interact with their users in a complex social world, and must build and maintain long-term relationships with them. A symposium at EMCSR 2010 dealt with theoretical and empirical research on long-term relationships of humans with humans, animals, and machines that show complex interactive behaviours, and with methodologies to create knowledge about interaction with companions, virtual butlers and assistive robots. This technical report brings together the five papers presented at this symposium.</summary>
  <author>
    <name>Dirk Heylen</name>
    <email>d.k.j.Heylen@ewi.utwente.nl</email>
  </author>
  <author>
    <name>Brigitte Krenn</name>
    <email>Brigitte.Krenn@ofai.at</email>
  </author>
  <author>
    <name>Sabine Payr</name>
    <email>Sabine.Payr@ofai.at</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/6818/Atom/cogprints-eprint-6818.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/6818"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/6818/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/6818/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/6818"/>
  <published>2010-04-04T16:15:28Z</published>
  <updated>2011-03-11T08:57:36Z</updated>
  <id>http://cogprints.org/id/eprint/6818</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/6818"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/6818</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/6818">
    <sword:depositedOn>2010-04-04T16:15:28Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Grounding of Meaning in Sensori-Motor Process</title>
  <summary type="xhtml">There is increasing agreement in the cognitive sciences community that our sensations are closely related to our actions. Our actions impact our sensations from the environment and the knowledge we have of it. Cognition is grounded in sensori-motor coordination. With a view to implementing such a performance in artificial systems, there is a need for a model of sensori-motor coordination. We propose here such a model, based on the generation of meaningful information by a system submitted to a constraint [1]. Systems and agents have constraints to satisfy which are related to their nature (stay alive for an organism, avoid obstacles for a robot, …). We propose to use an existing meaning generation process in which a system submitted to a constraint generates meaningful information (a meaning) when it receives information that has a connection with the constraint [2]. The generated meaning is precisely the connection existing between the received information and the constraint of the system. The generated meaning is used to trigger an action that will satisfy the constraint, and it links the system to its environment. A Meaning Generator System (MGS) has been introduced as a building block for higher-level systems (agents). The MGS links sensation and action through the satisfaction of the constraint of the system/agent. We use the MGS in a model based on constraint satisfaction for sensori-motor coordination in agents, be they organic or artificial. The meaning is generated by and for the agent that hosts the MGS. This approach makes it possible to address the concept of autonomy through the intrinsic or artificial nature of the constraint to be satisfied (organisms with intrinsic constraints/autonomy, artificial systems with artificial constraints/autonomy). The systemic nature of the MGS also allows positioning the groundings of the generated meaning as being inside or outside the MGS, and correspondingly identifying the constructivist and objectivist components of the generated meaning. The approach presented here thus makes available sensori-motor coordination by meaning generation through constraint satisfaction, with groundings of the generated meaning.</summary>
  <author>
    <name>Mr Christophe Menant</name>
    <email>christophe.menant@hotmail.fr</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/6698/Atom/cogprints-eprint-6698.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/6698"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/6698/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/6698/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/6698"/>
  <published>2009-11-14T11:34:42Z</published>
  <updated>2011-03-11T08:57:32Z</updated>
  <id>http://cogprints.org/id/eprint/6698</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/6698"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/6698</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/6698">
    <sword:depositedOn>2009-11-14T11:34:42Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Natural Variation and Neuromechanical Systems</title>
  <summary type="xhtml">Natural variation plays an important but subtle and often ignored role in neuromechanical systems. This is especially important when designing for living or hybrid systems which involve a biological or self-assembling component. Accounting for natural variation can be accomplished by taking a population phenomics approach to modeling and analyzing such systems. I will advocate the position that noise in neuromechanical systems is partially represented by natural variation inherent in user physiology. Furthermore, this noise can be augmentative in systems that couple physiological systems with technology. There are several tools and approaches that can be borrowed from computational biology to characterize the populations of users as they interact with the technology. In addition to transplanted approaches, the potential of natural variation can be understood as having a range of effects on both the individual's physiology and the function of the living/hybrid system over time. Finally, accounting for natural variation can be put to good use in human-machine system design, as three prescriptions for exploiting variation in design are proposed.</summary>
  <author>
    <name>Bradly Alicea</name>
    <email>freejumper@yahoo.com</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/6820/Atom/cogprints-eprint-6820.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/6820"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/6820/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/6820/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/6820"/>
  <published>2010-04-04T16:15:09Z</published>
  <updated>2011-03-11T08:57:36Z</updated>
  <id>http://cogprints.org/id/eprint/6820</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/6820"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/6820</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/6820">
    <sword:depositedOn>2010-04-04T16:15:09Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Sensorimotor process with constraint satisfaction. Grounding of meaning.</title>
  <summary type="xhtml">An approach to meaning generation that allows groundings of the meaning is used [1]. Considering that meaning generation is key to cognition, such approach goes with grounding of cognition in sensorimotor coordination. &#13;
Starting point is meaning generation by a system submitted to a constraint when it receives an incident information that has a connection with the constraint. The system generates a meaningful information (a meaning) in order to trigger an action that will satisfy its constraint. The generated meaning is precisely the connection existing between the received information and the constraint[2]. This simple process allows defining a Meaning Generator System (MGS) as well as groundings of the meaning in /out of the MGS. The action modifies the environment and the generated meaning. &#13;
This provides some coverage for sensorimotor coordination, and also brings on the same picture constructivist and objectivist aspects. &#13;
</summary>
  <author>
    <name>Mr Christophe Menant</name>
    <email>christophe.menant@hotmail.fr</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/6634/Atom/cogprints-eprint-6634.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/6634"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/6634/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/6634/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/6634"/>
  <published>2009-10-15T22:57:21Z</published>
  <updated>2011-03-11T08:57:25Z</updated>
  <id>http://cogprints.org/id/eprint/6634</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/6634"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/6634</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/6634">
    <sword:depositedOn>2009-10-15T22:57:21Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Dysfunctions of highly parallel real-time machines as 'developmental disorders': Security concerns and a Caveat Emptor</title>
  <summary type="xhtml">A cognitive paradigm for gene expression in developmental biology that is based on rigorous application of the asymptotic limit theorems of information theory can be adapted to highly parallel real-time computing. The coming Brave New World of massively parallel 'autonomic' and 'Self-X' machines driven by the explosion of multiple core and molecular computing technologies will not be spared patterns of canonical and idiosyncratic failure analogous to the developmental disorders affecting organisms that have had the relentless benefit of a billion years of evolutionary pruning. This paper provides a warning both to potential users of these machines and, given that many such disorders can be induced by external agents, to those concerned with larger scale matters of homeland security.</summary>
  <author>
    <name>Rodrick Wallace</name>
    <email>wallace@pi.cpmc.columbia.edu</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/7957/Atom/cogprints-eprint-7957.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/7957"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/7957/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/7957/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/7957"/>
  <published>2012-11-09T17:46:08Z</published>
  <updated>2012-11-09T17:46:08Z</updated>
  <id>http://cogprints.org/id/eprint/7957</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/7957"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/7957</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/7957">
    <sword:depositedOn>2012-11-09T17:46:08Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Modal Similarity </title>
  <summary type="xhtml">Just as Boolean rules define Boolean categories, the Boolean operators define higher-order Boolean categories referred to as modal categories. We examine the similarity order between these categories and the standard category of logical identity (i.e. the modal category defined by the biconditional or equivalence operator). Our goal is 4-fold: first, to introduce a similarity measure for determining this similarity order; second, to show that such a measure is a good predictor of the similarity assessment behaviour observed in our experiment involving key modal categories; third, to argue that as far as the modal categories are concerned, configural similarity assessment may be componential or analytical in nature; and lastly, to draw attention to the intimate interplay that may exist between deductive judgments, similarity assessment and categorisation.</summary>
  <author>
    <name>Dr. Ronaldo Vigo</name>
    <email>vigo@ohio.edu</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/6238/Atom/cogprints-eprint-6238.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/6238"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/6238/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/6238/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/6238"/>
  <published>2008-10-22T01:17:17Z</published>
  <updated>2011-03-11T08:57:13Z</updated>
  <id>http://cogprints.org/id/eprint/6238</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/6238"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/6238</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/6238">
    <sword:depositedOn>2008-10-22T01:17:17Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">An Open-Source Simulator for Cognitive Robotics Research: The Prototype of the iCub Humanoid Robot Simulator</title>
  <summary type="xhtml">This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed as a result of the “RobotCub” project, a collaborative European project aiming at developing a new open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project “ITALK” on the integration and transfer of action and language knowledge in cognitive robots. This is available open-source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform.</summary>
  <author>
    <name>Vadim Tikhanoff</name>
    <email/>
  </author>
  <author>
    <name>Angelo Cangelosi</name>
    <email/>
  </author>
  <author>
    <name>Paul Fitzpatrick</name>
    <email/>
  </author>
  <author>
    <name>Giorgio Metta</name>
    <email/>
  </author>
  <author>
    <name>Lorenzo Natale</name>
    <email/>
  </author>
  <author>
    <name>Francesco Nori</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/6237/Atom/cogprints-eprint-6237.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/6237"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/6237/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/6237/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/6237"/>
  <published>2008-10-22T01:17:40Z</published>
  <updated>2011-03-11T08:57:13Z</updated>
  <id>http://cogprints.org/id/eprint/6237</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/6237"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/6237</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/6237">
    <sword:depositedOn>2008-10-22T01:17:40Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Evolution of Prehension Ability in an Anthropomorphic Neurorobotic Arm</title>
  <summary type="xhtml">In this paper we show how a simulated anthropomorphic robotic arm controlled by an artificial neural network can develop effective reaching and grasping behaviour through a trial and error process in which the free parameters encode the control rules which regulate the fine-grained interaction between the robot and the environment and variations of the free parameters are retained or discarded on the basis of their effects at the level of the global behaviour exhibited by the robot situated in the environment. The obtained results demonstrate how the proposed methodology allows the robot to produce effective behaviours thanks to its ability to exploit the morphological properties of the robot’s body (i.e. its anthropomorphic shape, the elastic properties of its muscle-like actuators, and the compliance of its actuated joints) and the properties which arise from the physical interaction between the robot and the environment mediated by appropriate control rules.</summary>
  <author>
    <name>Prof Angelo Cangelosi</name>
    <email>acangelosi@plymouth.ac.uk</email>
  </author>
  <author>
    <name>Gianluca Massera</name>
    <email/>
  </author>
  <author>
    <name>Stefano Nolfi</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/6402/Atom/cogprints-eprint-6402.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/6402"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/6402/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/6402/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/6402"/>
  <published>2009-03-28T09:29:33Z</published>
  <updated>2011-03-11T08:57:20Z</updated>
  <id>http://cogprints.org/id/eprint/6402</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/6402"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/6402</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/6402">
    <sword:depositedOn>2009-03-28T09:29:33Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">At the Potter’s Wheel: An Argument for Material Agency</title>
  <summary type="xhtml">Consider a potter throwing a vessel on the wheel. Think of the complex ways brain, body, wheel and clay relate and interact with one another throughout the different stages of this activity and try to imagine some of the resources (physical, mental or biological) needed for the enaction of this creative process. Focus, for instance, on the first minutes of action when the potter attempts to centre the lump of clay on the wheel. The hands are grasping the clay. The fingers, bent slightly following the surface curvature, sense the clay and exchange vital tactile information necessary for a number of crucial decisions that are about to follow in the next few seconds. What is it that guides&#13;
the dextrous positioning of the potter’s hands and decides upon the precise amount of forward or downward pressure necessary for centring a lump of clay on the wheel? How do the potter’s fingers come to know the precise force of the&#13;
appropriate grip? What makes these questions even more fascinating is the ease by which the phenomena which they describe are accomplished. Yet underlying the effortless manner in which the potter’s hand reaches for and gradually&#13;
shapes the wet clay lies a whole set of conceptual challenges to some of our most deeply entrenched assumptions about what it means to be a human agent.</summary>
  <author>
    <name>Dr Lambros Malafouris</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/5473/Atom/cogprints-eprint-5473.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/5473"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/5473/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/5473/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/5473"/>
  <published>2007-04-04T00:00:00Z</published>
  <updated>2011-03-11T08:56:49Z</updated>
  <id>http://cogprints.org/id/eprint/5473</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/5473"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/5473</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/5473">
    <sword:depositedOn>2007-04-04T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Intrinsic Motivation Systems for Autonomous Mental Development</title>
  <summary type="xhtml">Exploratory activities seem to be intrinsically rewarding
for children and crucial for their cognitive development.
Can a machine be endowed with such an intrinsic motivation
system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development.The complexity of the robot’s activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations
which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.
Key words: Active learning, autonomy, behavior, complexity,
curiosity, development, developmental trajectory, epigenetic
robotics, intrinsic motivation, learning, reinforcement learning,
values.
</summary>
  <author>
    <name>Pierre-Yves Oudeyer</name>
    <email/>
  </author>
  <author>
    <name>Frédéric Kaplan</name>
    <email/>
  </author>
  <author>
    <name>Véréna Hafner</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3322/Atom/cogprints-eprint-3322.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3322"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3322/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3322/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3322"/>
  <published>2006-09-17T00:00:00Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3322</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3322"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3322</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3322">
    <sword:depositedOn>2006-09-17T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence</title>
  <summary type="xhtml">This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/6718/Atom/cogprints-eprint-6718.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/6718"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/6718/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/6718/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/6718"/>
  <published>2009-11-14T11:30:26Z</published>
  <updated>2011-03-11T08:57:33Z</updated>
  <id>http://cogprints.org/id/eprint/6718</id>
  <category term="other" label="Other" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/6718"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/6718</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/6718">
    <sword:depositedOn>2009-11-14T11:30:26Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Asimov's Coming Back</title>
  <summary type="xhtml">Ever since the word ‘ROBOT’ first appeared in a science&#13;
fiction in 1921, scientists and engineers have been trying&#13;
different ways to create it. Present technologies in&#13;
mechanical and electrical engineering makes it possible&#13;
to have robots in such places as industrial manufacturing&#13;
and assembling lines. Although they are&#13;
essentially robotic arms or similarly driven by electrical&#13;
power and signal control, they could be treated the&#13;
primitive pioneers in application. Researches in the&#13;
laboratories go much further. Interdisciplines are&#13;
directing the evolution of more advanced robots. Among these are artificial&#13;
intelligence, computational neuroscience, mathematics and robotics. These disciplines&#13;
come closer as more complex problems emerge.&#13;
From a robot’s point of view, three basic abilities are needed. They are thinking&#13;
and memory, sensory perceptions, control and behaving. These are capabilities we&#13;
human beings have to adapt ourselves to the environment. Although&#13;
researches on robots, especially on intelligent thinking, progress slowly, a revolution&#13;
for biological inspired robotics is spreading out in the laboratories all over the world.</summary>
  <author>
    <name>Mr. L. Wang</name>
    <email>liyu.wang@wadh.oxon.org</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4729/Atom/cogprints-eprint-4729.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4729"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4729/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4729/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4729"/>
  <published>2006-03-06T00:00:00Z</published>
  <updated>2011-03-11T08:56:20Z</updated>
  <id>http://cogprints.org/id/eprint/4729</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4729"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4729</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4729">
    <sword:depositedOn>2006-03-06T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">New mathematical foundations for AI and Alife: Are the necessary conditions for animal consciousness sufficient for the design of intelligent machines?</title>
  <summary type="xhtml">Rodney Brooks' call for 'new mathematics' to revitalize the disciplines of artificial intelligence and artificial life can be answered by adaptation of what Adams has called 'the informational turn in philosophy' and by the novel perspectives that program gives into empirical studies of animal cognition and consciousness. Going backward from the necessary conditions communication theory imposes on cognition and consciousness to sufficient conditions for machine design is, however, an extraordinarily difficult engineering task. The most likely use for the first generations of conscious machines will be to model the various forms of psychopathology, since we have little or no understanding of how consciousness is stabilized in humans or other animals.</summary>
  <author>
    <name>Rodrick Wallace</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/5149/Atom/cogprints-eprint-5149.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/5149"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/5149/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/5149/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/5149"/>
  <published>2006-09-17T00:00:00Z</published>
  <updated>2011-03-11T08:56:36Z</updated>
  <id>http://cogprints.org/id/eprint/5149</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/5149"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/5149</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/5149">
    <sword:depositedOn>2006-09-17T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Discovering Communication</title>
  <summary type="xhtml">What kind of motivation drives child language development? This
article presents a computational model and a robotic experiment to articulate
the hypothesis that children discover communication as a result
of exploring and playing with their environment. The considered
robotic agent is intrinsically motivated towards situations in which
it optimally progresses in learning. To experience optimal learning
progress, it must avoid situations already familiar but also situations
where nothing can be learnt. The robot is placed in an environment in
which both communicating and non-communicating objects are present.
As a consequence of its intrinsic motivation, the robot explores this environment
in an organized manner focusing first on non-communicative
activities and then discovering the learning potential of certain types of
interactive behaviour. In this experiment, the agent ends up being interested
by communication through vocal interactions without having
a specific drive for communication.</summary>
  <author>
    <name>Dr. P-Y. Oudeyer</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/5222/Atom/cogprints-eprint-5222.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/5222"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/5222/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/5222/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/5222"/>
  <published>2006-10-15T00:00:00Z</published>
  <updated>2011-03-11T08:56:39Z</updated>
  <id>http://cogprints.org/id/eprint/5222</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/5222"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/5222</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/5222">
    <sword:depositedOn>2006-10-15T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Evolution of Neural Networks for Helicopter Control: Why Modularity Matters</title>
  <summary type="xhtml">The problem of the automatic development of controllers for vehicles for which the exact characteristics are not known is considered in the context of miniature helicopter flocking. A methodology is proposed in which neural network based controllers are evolved in a simulation using a dynamic model qualitatively similar to the physical helicopter. Several network architectures and evolutionary sequences are investigated, and two approaches are found that can evolve very competitive controllers. The division of the neural network into modules and of the task into incremental steps seems to be a precondition for success, and we analyse why this might be so.</summary>
  <author>
    <name>Renzo De Nardi</name>
    <email/>
  </author>
  <author>
    <name>Julian Togelius</name>
    <email/>
  </author>
  <author>
    <name>Owen Holland</name>
    <email/>
  </author>
  <author>
    <name>Simon M. Lucas</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4957/Atom/cogprints-eprint-4957.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4957"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4957/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4957/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4957"/>
  <published>2006-07-03T00:00:00Z</published>
  <updated>2011-03-11T08:56:28Z</updated>
  <id>http://cogprints.org/id/eprint/4957</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4957"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4957</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4957">
    <sword:depositedOn>2006-07-03T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Evolution of Representations and Intersubjectivity as sources of the Self. An Introduction to the Nature of Self-Consciousness.</title>
  <summary type="xhtml">It is agreed by most people that self-consciousness is the result of an evolutionary process, and that      representations may have played an important role in that process. We would like to propose here that some   evolutionary stages can highlight links existing between representations and the notion of self, opening a possible   path to the nature of self-consciousness.   Our starting point is to focus on representations as usage oriented items for the subject that carries them. These     representations are about elements of the environment including conspecifics, and can also represent parts of the   subject without refering to a notion of self (we introduce the notion of "auto-representation" that does not carry the     notion of self-representation). Next step uses the performance of intersubjectivity (mirror neurons level in evolution)  
  where a subject has the capability to mentally simulate the observed action of a conspecific (Gallese 2001). We    propose that this intersubjectivity allows the subject to identify his auto-representation with the representations of   his conspecifics, and so to consider his auto-representation as existing in the environment. We show how this   evolutionary stage can introduce a notion of self-representation for a subject, opening a road to self-conciousness   and to self. This evolutionary approach to the self via self- representation is close to the current theory of the self   linked to representations and simulations (Metzinger 2003). We use a scenario about how evolution has brought   the performance of self-representation to self-consciousness. We develop a process describing how the anxiety   increase resulting from identification with endangered or suffering conspecifics may have called for the development   of tools to limit this anxiety (empathy, imitation, language), and how these tools have accelerated the evolutionary   process through a positive feedback on intersubjectivity (Menant 2004, 2005). We finish by summarizing the points   addressed, and propose some possible continuations. 
</summary>
  <author>
    <name>Christophe Menant</name>
    <email>christophe.menant@hotmail.fr</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4843/Atom/cogprints-eprint-4843.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4843"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4843/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4843/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4843"/>
  <published>2006-04-21Z</published>
  <updated>2011-03-11T08:56:23Z</updated>
  <id>http://cogprints.org/id/eprint/4843</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4843"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4843</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4843">
    <sword:depositedOn>2006-04-21Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Evolution of Representations. From Basic Life to Self-Representation and Self-Consciousness</title>
  <summary type="xhtml">The notion of representation is at the foundation of cognitive sciences and is used in theories of mind and consciousness. Other notions like ‘embodiment’, 'intentionality‘, 'guidance theory' or ‘biosemantics’ have been associated to the notion of representation to introduce its functional aspect. We would like to propose here that a conception of 'usage related' representation eases its positioning in an evolutionary context, and opens new areas of investigation toward self-representation and self-consciousness. The subject is presented in five parts:Following an overall presentation, the first part introduces a usage related representation as being an information managed by a system submitted to a constraint that has to be satisfied. We consider that such a system can generate a meaningful information by comparing its constraint to a received information (Menant 2003). We define a representation as being made of the received information and of the meaningful information. Such approach allows groundings in and out for the representation relatively to the system. The second part introduces the two types of representations we want to focus on for living organisms: representations of conspecifics and auto-representation, the latter being defined without using a notion of self-representation. Both types of representations have existed for our  pre-human ancestors which can be compared to today great apes.In the third part, we use the performance of intersubjectivity as identified in group life with the presence of mirror neurons in the organisms. Mirror neurons have been discovered in the 90‘s (Rizzolatti &amp; al.1996, Gallese &amp; al.1996). The level of intersubjectivity that can be attributed to non human primates as related to mirror neurons is currently a subject of debate (Decety 2003). We consider that a limited intersubjectivity between pre-human primates made possible a merger of both types of representations. 
The fourth part proposes that such a merger of representations feeds the auto-representation with the meanings associated to the representations of conspecifics, namely the meanings associated to an entity perceived as existing in the environment. We propose that auto-representation carrying these new meanings makes up the first elements of self-representation. Intersubjectivity has allowed auto-representation to evolve into self-representation, avoiding the homunculus risk.  
The fifth part is a continuation to other presentations (Menant 2004, 2005) about possible evolution of self-representation into self-consciousness. We propose that identification with suffering or endangered conspecifics has increased anxiety, and that the tools used to limit this anxiety (development of empathy, imitation, language and group life) have provided a positive feedback on intersubjectivity and created an evolutionary engine for the organism. Other outcomes have also been possible. Such approach roots consciousness in emotions. 
The evolutionary scenario proposed here does not introduce explicitly the question of phenomenal consciousness (Block 1995). This question is to be addressed later with the help of this scenario.The conclusion lists the points introduced here with their possible continuations.
</summary>
  <author>
    <name>Christophe Menant</name>
    <email>christophe.menant@hotmail.fr</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/5569/Atom/cogprints-eprint-5569.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/5569"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/5569/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/5569/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/5569"/>
  <published>2007-05-28Z</published>
  <updated>2011-03-11T08:56:51Z</updated>
  <id>http://cogprints.org/id/eprint/5569</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/5569"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/5569</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/5569">
    <sword:depositedOn>2007-05-28Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">SwarMAV: A Swarm of Miniature Aerial Vehicles</title>
  <summary type="xhtml">As the MAV (Micro or Miniature Aerial Vehicles) field matures, we expect to see that the platform's degree of autonomy, the information exchange, and the coordination with other manned and unmanned actors, will become at least as crucial as its aerodynamic design. The project described in this paper explores some aspects of a particularly exciting possible avenue of development: an autonomous swarm of MAVs which exploits its inherent reliability (through redundancy), and its ability to exchange information among the members, in order to cope with a dynamically changing environment and achieve its mission. We describe the successful realization of a prototype experimental platform weighing only 75g, and outline a strategy for the automatic design of a suitable controller.</summary>
  <author>
    <name>Renzo De Nardi</name>
    <email/>
  </author>
  <author>
    <name>Owen Holland</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/5571/Atom/cogprints-eprint-5571.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/5571"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/5571/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/5571/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/5571"/>
  <published>2007-05-28Z</published>
  <updated>2011-03-11T08:56:51Z</updated>
  <id>http://cogprints.org/id/eprint/5571</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/5571"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/5571</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/5571">
    <sword:depositedOn>2007-05-28Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">UltraSwarm: A Further Step Towards a Flock of Miniature Helicopters
</title>
  <summary type="xhtml">We describe further progress towards the development of a
MAV (micro aerial vehicle) designed as an enabling tool to investigate aerial flocking. Our research focuses on the use of low cost off the shelf vehicles and sensors to enable fast prototyping and to reduce development costs. Details on the design of the embedded electronics and the
modification of the chosen toy helicopter are presented, and the technique used for state estimation is described. The fusion of inertial data through an unscented Kalman filter is used to estimate the helicopter’s state, and this forms the main input to the control system. Since no detailed dynamic model of the helicopter in use is available, a method is proposed for automated system identification, and for subsequent controller design based on artificial evolution. Preliminary results obtained with a dynamic simulator of a helicopter are reported, along with some encouraging results for tackling the problem of flocking.</summary>
  <author>
    <name>Renzo De Nardi</name>
    <email/>
  </author>
  <author>
    <name>Owen Holland</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3027/Atom/cogprints-eprint-3027.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3027"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3027/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3027/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3027"/>
  <published>2003-06-19Z</published>
  <updated>2011-03-11T08:55:18Z</updated>
  <id>http://cogprints.org/id/eprint/3027</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3027"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3027</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3027">
    <sword:depositedOn>2003-06-19Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Cognition is categorization</title>
  <summary type="xhtml">We organisms are sensorimotor systems. The things in the world come in contact with our sensory surfaces, and we interact with them based on what that sensorimotor contact “affords”. All of our categories consist in ways we behave differently toward different kinds of things -- things we do or don’t eat, mate-with, or flee-from, or the things that we describe, through our language, as prime numbers, affordances, absolute discriminables, or truths. That is all that cognition is for, and about.</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4979/Atom/cogprints-eprint-4979.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4979"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4979/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4979/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4979"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4979</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4979"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4979</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4979">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Autonomous learning and reproduction of complex
sequences: a multimodal architecture for
bootstraping imitation games</title>
  <summary type="xhtml">This paper introduces a control architecture
for the learning of complex sequence of gestures
applied to autonomous robots. The architecture
is designed to exploit the robot internal
sensory-motor dynamics generated by
visual, proprioceptive, and predictive informations
in order to provide intuitive behaviors
in the purpose of natural interactions
with humans.</summary>
  <author>
    <name>Pierre Andry</name>
    <email/>
  </author>
  <author>
    <name>Philippe Gaussier</name>
    <email/>
  </author>
  <author>
    <name>Jacqueline Nadel</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/5572/Atom/cogprints-eprint-5572.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/5572"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/5572/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/5572/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/5572"/>
  <published>2007-05-28Z</published>
  <updated>2011-03-11T08:56:51Z</updated>
  <id>http://cogprints.org/id/eprint/5572</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/5572"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/5572</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/5572">
    <sword:depositedOn>2007-05-28Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Beyond swarm intelligence: The Ultraswarm</title>
  <summary type="xhtml">This paper explores the idea that it may be possible to
combine two ideas – UAV flocking, and wireless cluster
computing – in a single system, the UltraSwarm. The
possible advantages of such a system are considered, and
solutions to some of the technical problems are identified.
Initial work on constructing such a system based around
miniature electric helicopters is described.</summary>
  <author>
    <name>Owen Holland</name>
    <email/>
  </author>
  <author>
    <name>John Woods</name>
    <email/>
  </author>
  <author>
    <name>Renzo De Nardi</name>
    <email/>
  </author>
  <author>
    <name>Adrian Clark</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4981/Atom/cogprints-eprint-4981.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4981"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4981/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4981/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4981"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4981</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4981"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4981</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4981">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Covert Perceptual Capability Development</title>
  <summary type="xhtml">In this paper, we propose a model to develop
robots’ covert perceptual capability using reinforcement learning. Covert perceptual behavior is treated as action selected by a motivational system. We apply this model to
vision-based navigation. The goal is to enable
a robot to learn road boundary type. Instead
of dealing with problems in controlled environments with a low-dimensional state space,
we test the model on images captured in non-stationary environments. Incremental Hierarchical Discriminant Regression is used to
generate states on the fly. Its coarse-to-fine
tree structure guarantees real-time retrieval
in high-dimensional state space. K Nearest-Neighbor strategy is adopted to further reduce training time complexity.</summary>
  <author>
    <name>Xiao Huang</name>
    <email/>
  </author>
  <author>
    <name>Juyang Weng</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4969/Atom/cogprints-eprint-4969.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4969"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4969/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4969/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4969"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4969</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4969"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4969</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4969">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Developmental acquisition of entrainment skills in
robot swinging using van der Pol oscillators</title>
  <summary type="xhtml">In this study we investigated the effects of different
morphological configurations on a robot swinging
task using van der Pol oscillators. The task was
examined using two separate degrees of freedom
(DoF), both in the presence and absence of neural
entrainment. Neural entrainment stabilises the
system, reduces time-to-steady state and relaxes the
requirement for a strong coupling with the
environment in order to achieve mechanical
entrainment. It was found that staged release of the
distal DoF does not have any benefits over using both
DoF from the onset of the experimentation. On the
contrary, it is less efficient, both with respect to the
time needed to reach a stable oscillatory regime and
the maximum amplitude it can achieve. The same
neural architecture is successful in achieving
neuromechanical entrainment for a robotic walking
task.</summary>
  <author>
    <name>Paschalis Veskos</name>
    <email/>
  </author>
  <author>
    <name>Yiannis Demiris</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4983/Atom/cogprints-eprint-4983.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4983"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4983/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4983/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4983"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4983</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4983"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4983</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4983">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Discovering Motion Flow by
Temporal-Informational Correlations in Sensors</title>
  <summary type="xhtml">A method is presented for adapting the sensors
of a robot to its current environment and
to learn motion flow detection by observing
the informational relations between sensors
and actuators. Examples are shown where
the robot learns to detect motion flow from
sensor data generated by its own movement.</summary>
  <author>
    <name>Lars Olsson</name>
    <email/>
  </author>
  <author>
    <name>Chrystopher L. Nehaniv</name>
    <email/>
  </author>
  <author>
    <name>Daniel Polani</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4404/Atom/cogprints-eprint-4404.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4404"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4404/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4404/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4404"/>
  <published>2005-06-19Z</published>
  <updated>2011-03-11T08:56:06Z</updated>
  <id>http://cogprints.org/id/eprint/4404</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4404"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4404</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4404">
    <sword:depositedOn>2005-06-19Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Distributed Processes, Distributed Cognizers and Collaborative Cognition</title>
  <summary type="xhtml">Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate  what  doing (“know-how”) This is called the Turing Test. It cannot test whether  a process can generate feeling, hence thinking -- only whether  it can generate doing. The processes that generate thinking and know-how are “distributed” within the heads of thinkers, but not across thinkers’ heads. Hence there is no such thing as distributed cognition, only collaborative  cognition. Email and the Web have spawned a new form of collaborative cognition that draws upon individual brains’ real-time interactive  potential  in ways that were not possible in oral, written  or print interactions. 

</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4974/Atom/cogprints-eprint-4974.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4974"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4974/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4974/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4974"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4974</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4974"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4974</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4974">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Dynamical Systems Approach to Infant Motor
Development: Implications for Epigenetic Robotics</title>
  <summary type="xhtml"> </summary>
  <author>
    <name>Eugene C. Goldfield</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4967/Atom/cogprints-eprint-4967.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4967"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4967/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4967/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4967"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4967</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4967"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4967</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4967">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Exploiting Vestibular Output during Learning
Results in Naturally Curved Reaching Trajectories</title>
  <summary type="xhtml">Teaching a humanoid robot to reach for a
visual target is a complex problem in part because
of the high dimensionality of the control
space. In this paper, we demonstrate a biologically
plausible simplification of the reaching
process that replaces the degrees of freedom
in the neck of the robot with sensory readings
from a vestibular system. We show that
this simplification introduces errors that are
easily overcome by a standard learning algorithm.
Furthermore, the errors that are necessarily
introduced by this simplification result
in reaching trajectories that are curved in the
same way as human reaching trajectories.</summary>
  <author>
    <name>Ganghua Sun</name>
    <email/>
  </author>
  <author>
    <name>Brian Scassellati</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4995/Atom/cogprints-eprint-4995.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4995"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4995/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4995/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4995"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:31Z</updated>
  <id>http://cogprints.org/id/eprint/4995</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4995"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4995</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4995">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A formal approach of developmental robotics and psychology</title>
  <summary type="xhtml"> </summary>
  <author>
    <name>Ken Prepin</name>
    <email/>
  </author>
  <author>
    <name>Philippe Gaussier</name>
    <email/>
  </author>
  <author>
    <name>Arnaud Revel</name>
    <email/>
  </author>
  <author>
    <name>Jacqueline Nadel</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4939/Atom/cogprints-eprint-4939.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4939"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4939/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4939/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4939"/>
  <published>2006-07-16Z</published>
  <updated>2011-03-11T08:56:28Z</updated>
  <id>http://cogprints.org/id/eprint/4939</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4939"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4939</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4939">
    <sword:depositedOn>2006-07-16Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">From Imprinting to Adaptation: Building a History of Affective Interaction</title>
  <summary type="xhtml">We present a Perception-Action architecture and experiments to simulate imprinting—the establishment of strong attachment links with a “caregiver”—in a robot. Following recent theories, we do not consider imprinting as rigidly timed and irreversible, but as a more flexible phenomenon that allows for further adaptation as a result of reward-based learning through experience. Our architecture reconciles these two types of perceptual learning traditionally considered as different and even incompatible. After the initial imprinting, adaptation is achieved in the context of a history of “affective” interactions between the robot and a human, driven by “distress” and “comfort” responses in the robot.</summary>
  <author>
    <name>Arnaud J. Blanchard</name>
    <email/>
  </author>
  <author>
    <name>Lola Canamero</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4961/Atom/cogprints-eprint-4961.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4961"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4961/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4961/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4961"/>
  <published>2006-07-16Z</published>
  <updated>2011-03-11T08:56:28Z</updated>
  <id>http://cogprints.org/id/eprint/4961</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4961"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4961</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4961">
    <sword:depositedOn>2006-07-16Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">From motor babbling to hierarchical learning by imitation: a robot developmental pathway</title>
  <summary type="xhtml">How does an individual use the knowledge acquired through self-exploration as a manipulable model through which to understand others and benefit from their knowledge? How can developmental and social learning be combined for their mutual benefit? In this paper we review a hierarchical architecture (HAMMER) which allows a principled way of combining knowledge gained through exploration with knowledge from others, through the creation and use of multiple inverse and forward models. We describe how Bayesian Belief Networks can be used to learn the association between a robot’s motor commands and sensory consequences (forward models), and how the inverse association can be used for imitation. Inverse models created through self-exploration, as well as those derived from observing others, can coexist and compete in a principled unified framework that utilises the simulation theory of mind approach to mentally rehearse and understand the actions of others.</summary>
  <author>
    <name>Yiannis Demiris</name>
    <email/>
  </author>
  <author>
    <name>Anthony Dearden</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4993/Atom/cogprints-eprint-4993.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4993"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4993/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4993/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4993"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:31Z</updated>
  <id>http://cogprints.org/id/eprint/4993</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4993"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4993</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4993">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">How can robots facilitate social interaction of children with autism?: Possible
implications for educational environments</title>
  <summary type="xhtml">Children with autism have difficulties in social interaction with other people, and much attention in recent years has been directed to robots as therapy tools. We studied the social interaction between children with autism and robots longitudinally to observe developmental changes in their performance. We observed children at a special school for six months and analyzed their performance with robots. The results showed that two children adapted to the experimental situations and developed interaction with the robots. This suggests that they changed their interaction with the robots from an object-like one into an agent-like one.</summary>
  <author>
    <name>Emi Miyamoto</name>
    <email/>
  </author>
  <author>
    <name>Mingyi Lee</name>
    <email/>
  </author>
  <author>
    <name>Hiroyuki Fujii</name>
    <email/>
  </author>
  <author>
    <name>Michio Okada</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4531/Atom/cogprints-eprint-4531.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4531"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4531/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4531/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4531"/>
  <published>2005-09-18Z</published>
  <updated>2011-03-11T08:56:10Z</updated>
  <id>http://cogprints.org/id/eprint/4531</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4531"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4531</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4531">
    <sword:depositedOn>2005-09-18Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Information and Meaning in Life, Humans and Robots</title>
  <summary type="xhtml">Information and meaning exist around us and within ourselves, and the same information can correspond to different meanings. This is true for humans and animals, and it is becoming true for robots. We propose here an overview of this subject using a systemic tool related to meaning generation that has already been published (C. Menant, Entropy 2003). The Meaning Generator System (MGS) is a system submitted to a constraint that generates meaningful information when it receives incident information related to the constraint. The content of the meaningful information is made explicit, and its function is to trigger an action that will be used to satisfy the constraint of the system. The MGS has been introduced in the case of basic life submitted to a "stay alive" constraint. We propose here to see how the usage of the MGS can be extended to more complex living systems, to humans, and to robots by introducing new types of constraints and integrating the MGS into higher-level systems. The application of the MGS to humans is partly based on a scenario relative to the evolution of body self-awareness toward self-consciousness that has already been presented (C. Menant, Biosemiotics 2003, and TSC 2004). The application of the MGS to robots is based on the definition of the MGS applied to robot functionality, taking into account the origins of the constraints. We conclude with a summary of this overview and with themes that can be linked to this systemic approach to meaning generation.</summary>
  <author>
    <name>Christophe Menant</name>
    <email>crmenant@free.fr</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4998/Atom/cogprints-eprint-4998.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4998"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4998/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4998/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4998"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:31Z</updated>
  <id>http://cogprints.org/id/eprint/4998</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4998"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4998</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4998">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Introduction: The Fifth International Workshop on
Epigenetic Robotics</title>
  <summary type="xhtml"> </summary>
  <author>
    <name>Luc Berthouze</name>
    <email/>
  </author>
  <author>
    <name>Frederic Kaplan</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4963/Atom/cogprints-eprint-4963.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4963"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4963/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4963/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4963"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:28Z</updated>
  <id>http://cogprints.org/id/eprint/4963</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4963"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4963</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4963">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">On the notion of motor primitives in humans and robots</title>
  <summary type="xhtml">This article reviews two reflexive motor patterns in humans: primitive reflexes and motor primitives. Both terms coexist in the literature on motor development and motor control, yet they are not synonyms. While primitive reflexes are part of the temporary motor repertoire in early ontogeny, motor primitives refer to sets of motor patterns that are considered basic units of voluntary motor control thought to be present throughout the life-span. The article provides an overview of the anatomy and neurophysiology of human reflexive motor patterns to elucidate that both concepts are rooted in the architecture of the spinal cord. I will advocate that an understanding of the human motor system that encompasses both primitive reflexes and motor primitives, as well as their interaction with supraspinal motor centers, will lead to an appreciation of the richness of the human motor repertoire, which in turn seems imperative for designing epigenetic robots and highly adaptable human-machine interfaces.</summary>
  <author>
    <name>Jürgen Konczak</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4966/Atom/cogprints-eprint-4966.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4966"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4966/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4966/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4966"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4966</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4966"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4966</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4966">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Ongoing Emergence:
A Core Concept in Epigenetic Robotics</title>
  <summary type="xhtml">We propose ongoing emergence as a core concept in
epigenetic robotics. Ongoing emergence refers to the
continuous development and integration of new skills
and is exhibited when six criteria are satisfied: (1)
continuous skill acquisition, (2) incorporation of new
skills with existing skills, (3) autonomous development
of values and goals, (4) bootstrapping of initial skills, (5)
stability of skills, and (6) reproducibility. In this paper
we: (a) provide a conceptual synthesis of ongoing
emergence based on previous theorizing, (b) review
current research in epigenetic robotics in light of ongoing
emergence, (c) provide prototypical examples of ongoing
emergence from infant development, and (d) outline
computational issues relevant to creating robots
exhibiting ongoing emergence.</summary>
  <author>
    <name>Christopher Prince</name>
    <email/>
  </author>
  <author>
    <name>Nathan Helder</name>
    <email/>
  </author>
  <author>
    <name>George Hollich</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4986/Atom/cogprints-eprint-4986.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4986"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4986/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4986/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4986"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:30Z</updated>
  <id>http://cogprints.org/id/eprint/4986</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4986"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4986</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4986">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Out in the World: What Did The Robot Hear And
See?</title>
  <summary type="xhtml"> </summary>
  <author>
    <name>Lijin Aryananda</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4994/Atom/cogprints-eprint-4994.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4994"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4994/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4994/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4994"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:31Z</updated>
  <id>http://cogprints.org/id/eprint/4994</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4994"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4994</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4994">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A Platform for Education in ‘Interaction Design
for Adaptive Robots’</title>
  <summary type="xhtml">This paper introduces an educational software platform for a small teddy-bear-like robot, RobotPHONE. Utilizing the back-drivability of the robot's three joints (6 DOFs in total), the platform enables the robot to learn correspondences between gestures (posed by a human teacher) and voice (given by a human). Because the motion and the voice can be symmetrically produced and recognized, the robot would be an ideal tool for research and education in learning and development.</summary>
  <author>
    <name>Oka Natsuki</name>
    <email/>
  </author>
  <author>
    <name>Ozaka Mitsuyoshi</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4990/Atom/cogprints-eprint-4990.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4990"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4990/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4990/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4990"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:31Z</updated>
  <id>http://cogprints.org/id/eprint/4990</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4990"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4990</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4990">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Robot Gesture Generation from Environmental
Sounds Using Inter-modality Mapping</title>
  <summary type="xhtml">We propose a motion generation model in which robots infer the sound source of an environmental sound and imitate its motion. Sharing environmental sounds between humans and robots enables them to share environmental information, yet it is difficult to transmit environmental sounds in human-robot communication. We approached this problem by focusing on iconic gestures: concretely, the robot infers the motion of the sound-source object and maps it onto robot motion. This method enabled robots to imitate the motion of the sound source using their bodies.</summary>
  <author>
    <name>Yuya Hattori</name>
    <email/>
  </author>
  <author>
    <name>Hideki Kozima</name>
    <email/>
  </author>
  <author>
    <name>Kazunori Komatani</name>
    <email/>
  </author>
  <author>
    <name>Tetsuya Ogata</name>
    <email/>
  </author>
  <author>
    <name>Hiroshi G. Okuno</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4992/Atom/cogprints-eprint-4992.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4992"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4992/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4992/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4992"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:31Z</updated>
  <id>http://cogprints.org/id/eprint/4992</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4992"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4992</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4992">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Robot Self-Characterisation of Experience Using
Trajectories in Sensory-Motor Phase Space</title>
  <summary type="xhtml">We describe sensorimotor phase-plots, constructed from raw sensor data using information-theoretic methods, as a way for a robotic agent to characterise its interactions and interaction history. Measurements of the position and shape of the trajectories, including fractal dimension, can be used to characterise the agent-environment interaction.</summary>
  <author>
    <name>Naeem A. Mirza</name>
    <email/>
  </author>
  <author>
    <name>Chrystopher Nehaniv</name>
    <email/>
  </author>
  <author>
    <name>Rene te Boekhorst</name>
    <email/>
  </author>
  <author>
    <name>Kerstin Dautenhahn</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4982/Atom/cogprints-eprint-4982.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4982"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4982/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4982/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4982"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4982</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4982"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4982</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4982">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The RobotCub Approach to the Development of Cognition</title>
  <summary type="xhtml">This paper elaborates on the workplan of an
initiative in embodied cognition: RobotCub. Our
goal here is to provide background and to
motivate our long-term plan of empirical
research including brain and robotic sciences
following the principles of epigenetic robotics.</summary>
  <author>
    <name>Giorgio Metta</name>
    <email/>
  </author>
  <author>
    <name>David Vernon</name>
    <email/>
  </author>
  <author>
    <name>Giulio Sandini</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4980/Atom/cogprints-eprint-4980.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4980"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4980/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4980/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4980"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4980</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4980"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4980</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4980">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Scaffolding Cognition with Words</title>
  <summary type="xhtml">We describe a set of experiments investigating the role of natural-language symbols in scaffolding situated action. Agents are evolved to respond appropriately to commands in order to perform simple tasks. We explore three different conditions, which show a significant advantage to the re-use of a public symbol system, with self-cueing leading to qualitative changes in performance. This is modelled by looping spoken output via the environment back to heard input. We argue that this work can be linked to, and sheds new light on, the account of self-directed speech advanced by the developmental psychologist Vygotsky in his model of the development of higher cognitive function.</summary>
  <author>
    <name>Robert Clowes</name>
    <email/>
  </author>
  <author>
    <name>Anthony F. Morse</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4987/Atom/cogprints-eprint-4987.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4987"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4987/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4987/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4987"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:31Z</updated>
  <id>http://cogprints.org/id/eprint/4987</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4987"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4987</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4987">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Segmentation Stability: a Key Component for Joint
Attention</title>
  <summary type="xhtml"> </summary>
  <author>
    <name>Jean-Christophe Baillie</name>
    <email/>
  </author>
  <author>
    <name>Matthieu Nottale</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4989/Atom/cogprints-eprint-4989.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4989"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4989/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4989/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4989"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:31Z</updated>
  <id>http://cogprints.org/id/eprint/4989</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4989"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4989</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4989">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Speeding up Learning with Dynamic Environment
Shaping in Evolutionary Robotics</title>
  <summary type="xhtml">Evolutionary Robotics is a promising approach to automatically building efficient controllers using stochastic optimization techniques. However, work in this area is often confronted with complex environments in which even simple tasks cannot be achieved. In this paper, we propose an approach based on explicit problem decomposition and dynamic environment shaping to ease the learning task.</summary>
  <author>
    <name>Nicolas Bredeche</name>
    <email/>
  </author>
  <author>
    <name>Louis Hugues</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4968/Atom/cogprints-eprint-4968.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4968"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4968/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4968/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4968"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4968</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4968"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4968</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4968">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Tapping into Touch</title>
  <summary type="xhtml">Humans use a set of exploratory procedures to examine object properties through grasping and touch. Our goal is to exploit similar methods with a humanoid robot to enable developmental learning about manipulation. We use a compliant robot hand to find objects without prior knowledge of their presence or location, and then tap those objects with a finger. This behavior lets the robot generate and collect samples of the contact sound produced by impact with each object. We demonstrate the feasibility of recognizing objects by their sound, and relate this to human performance in situations analogous to that of the robot.</summary>
  <author>
    <name>Eduardo Torres-Jara</name>
    <email/>
  </author>
  <author>
    <name>Lorenzo Natale</name>
    <email/>
  </author>
  <author>
    <name>Paul Fitzpatrick</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4985/Atom/cogprints-eprint-4985.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4985"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4985/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4985/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4985"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:30Z</updated>
  <id>http://cogprints.org/id/eprint/4985</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4985"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4985</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4985">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Towards Teaching a Robot to Count Objects</title>
  <summary type="xhtml">We present here an example of incremental learning between two computational models dealing with different modalities: a model that switches spatial visual attention and a model that learns the ordinal sequence of spoken numbers. Merging them via a common reward signal nonetheless produces a cardinal counting behaviour that can be implemented on a robot.</summary>
  <author>
    <name>Julien Vitay</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4976/Atom/cogprints-eprint-4976.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4976"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4976/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4976/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4976"/>
  <published>2006-07-23Z</published>
  <updated>2011-03-11T08:56:29Z</updated>
  <id>http://cogprints.org/id/eprint/4976</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4976"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4976</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4976">
    <sword:depositedOn>2006-07-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Using social robots to study abnormal social development</title>
  <summary type="xhtml">Social robots recognize and respond to human social cues with appropriate behaviors. Social robots, and the technology used in their construction, can be unique tools in the study of abnormal social development. Autism is a pervasive developmental disorder characterized by social and communicative impairments. Based on three years of integration and immersion with a clinical research group that performs more than 130 diagnostic evaluations of children for autism per year, this paper discusses how social robots will make an impact on the ways in which we diagnose, treat, and understand autism.</summary>
  <author>
    <name>Brian Scassellati</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3377/Atom/cogprints-eprint-3377.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3377"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3377/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3377/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3377"/>
  <published>2004-01-13Z</published>
  <updated>2011-03-11T08:55:27Z</updated>
  <id>http://cogprints.org/id/eprint/3377</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3377"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3377</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3377">
    <sword:depositedOn>2004-01-13Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Integrated 2-D Optical Flow Sensor</title>
  <summary type="xhtml">I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves previous approaches both with respect to the applied model of optical flow estimation as well as the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model assures spatiotemporal continuity on visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30x30 array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation. </summary>
  <author>
    <name>Dr. Alan Stocker</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4146/Atom/cogprints-eprint-4146.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4146"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4146/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4146/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4146"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:52Z</updated>
  <id>http://cogprints.org/id/eprint/4146</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4146"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4146</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4146">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Artificial Immune Networks for Robot Control</title>
  <summary type="xhtml">We investigate how a robot can be provided with an architecture that would enable it to developmentally 'grow up' and accomplish complex tasks by building on basic built-in capabilities. The paper introduces the basic principles of artificial immune systems (AIS) and presents experimental results from a real robot. To our knowledge, this is the first implementation of an AIS architecture for controlling a real mobile robot.</summary>
  <author>
    <name>Juergen Rattenberger</name>
    <email/>
  </author>
  <author>
    <name>Patrick M. Poelz</name>
    <email/>
  </author>
  <author>
    <name>Erich Prem</name>
    <email/>
  </author>
  <author>
    <name>Emma Hart</name>
    <email/>
  </author>
  <author>
    <name>Andrew Webb</name>
    <email/>
  </author>
  <author>
    <name>Peter Ross</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4150/Atom/cogprints-eprint-4150.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4150"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4150/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4150/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4150"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:52Z</updated>
  <id>http://cogprints.org/id/eprint/4150</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4150"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4150</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4150">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Binding tactile and visual sensations via unique association by cross-anchoring between double-touching and self-occlusion</title>
  <summary type="xhtml">Binding, that is, finding the correspondence between sensations in different modalities such as vision and touch, is one of the most fundamental cognitive functions. Without a priori knowledge of this correspondence, binding is a formidable issue for a robot, since it often perceives multiple physical phenomena through its different modal sensors and must therefore correctly match the foci of attention in different modalities, which may have multiple correspondences with each other. We suppose that learning a multimodal representation of the body should be the first step toward binding, since the morphological constraints in self-body observation would make the binding problem tractable. The multimodal sensations are expected to be constrained when perceiving the robot's own body, so as to single out the unique parts of the multiple correspondences reflecting its morphology. In this paper, we propose a method to match the foci of attention in vision and touch through unique association by cross-anchoring different modalities. Simple experiments show the validity of the proposed method.</summary>
  <author>
    <name>Yuichiro Yoshikawa</name>
    <email/>
  </author>
  <author>
    <name>Koh Hosoda</name>
    <email/>
  </author>
  <author>
    <name>Minoru Asada</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4067/Atom/cogprints-eprint-4067.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4067"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4067/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4067/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4067"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4067</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4067"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4067</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4067">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The Challenges of Joint Attention</title>
  <summary type="xhtml">This paper discusses the concept of joint attention and the different skills underlying its development. We argue that joint attention is much more than gaze following or simultaneous looking because it implies a shared intentional relation to the world. The current state of the art in robotic and computational models of the different prerequisites of joint attention is discussed in relation to a developmental timeline drawn from results in child studies.</summary>
  <author>
    <name>Frederic Kaplan</name>
    <email/>
  </author>
  <author>
    <name>Verena Hafner</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4053/Atom/cogprints-eprint-4053.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4053"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4053/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4053/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4053"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:49Z</updated>
  <id>http://cogprints.org/id/eprint/4053</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4053"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4053</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4053">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Children, Humanoid Robots and Caregivers</title>
  <summary type="xhtml">This paper presents developmental learning on a humanoid robot from human-robot interactions. We consider in particular teaching humanoids as children during the child's Separation and Individuation developmental phase (Mahler, 1979). Cognitive development during this phase is characterized both by the child's dependence on her mother for learning while becoming aware of her own individuality, and by self-exploration of her physical surroundings. We propose a learning framework for a humanoid robot inspired by such cognitive development.</summary>
  <author>
    <name>Artur Arsenio</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4063/Atom/cogprints-eprint-4063.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4063"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4063/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4063/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4063"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4063</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4063"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4063</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4063">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The DayOne project: how far can a robot develop in 24 hours?</title>
  <summary type="xhtml">What could a robot learn in one day? This paper describes the DayOne project, an endeavor to build an epigenetic robot that can bootstrap from a very rudimentary state to relatively sophisticated perception of objects and activities in a matter of hours. The project is inspired by the astonishing rapidity with which many animals, such as foals and lambs, adapt to their surroundings on the first day of their life. While such plasticity may not be a sufficient basis for long-term cognitive development, it may be at least necessary, and share underlying infrastructure. This paper suggests that a sufficiently flexible perceptual system begins to look and act like it contains cognitive structures.</summary>
  <author>
    <name>Paul Fitzpatrick</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4057/Atom/cogprints-eprint-4057.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4057"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4057/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4057/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4057"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4057</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4057"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4057</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4057">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Developmental Learning: A Case Study in Understanding “Object Permanence”</title>
  <summary type="xhtml">The concepts of muddy environments and muddy tasks set the ground for understanding the essence of intelligence, both artificial and natural, which further motivates the need for Developmental Learning in machines. In this paper, a biologically inspired computational model is proposed to study one of the fundamental and controversial issues in cognitive science, “Object Permanence.” This model is implemented on a robot, which enables us to examine the robot’s behavior based on perceptual development through real-time experiences. Our experimental results show consistency with prior research on human infants, which not only sheds light on the highly controversial issue of object permanence, but also demonstrates how biologically inspired developmental models can potentially develop intelligent machines and verify computational modeling that has been established in cognitive science.</summary>
  <author>
    <name>Yi Chen</name>
    <email/>
  </author>
  <author>
    <name>Juyang Weng</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4060/Atom/cogprints-eprint-4060.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4060"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4060/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4060/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4060"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4060</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4060"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4060</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4060">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Developmental Stages of Perception and Language Acquisition in a Perceptually Grounded Robot</title>
  <summary type="xhtml">The objective of this research is to develop a system for language learning based on a minimum of pre-wired language-specific functionality, that is compatible with observations of perceptual and language capabilities in the human developmental trajectory. In the proposed system, meaning (in terms of descriptions of events and spatial relations) is extracted from video images based on detection of position, motion, physical contact and their parameters. Mapping of sentence form to meaning is performed by learning grammatical constructions that are retrieved from a construction inventory based on the constellation of closed class items uniquely identifying the target sentence structure. The resulting system displays robust acquisition behavior that reproduces certain observations from developmental studies, with very modest “innate” language specificity.</summary>
  <author>
    <name>Peter Ford Dominey</name>
    <email/>
  </author>
  <author>
    <name>Jean-David Boucher</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4143/Atom/cogprints-eprint-4143.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4143"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4143/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4143/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4143"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:52Z</updated>
  <id>http://cogprints.org/id/eprint/4143</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4143"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4143</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4143">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The Effects on Visual Information in a Robot in Environments with Oriented Contours</title>
  <summary type="xhtml">For several decades experiments have been performed where animals have been reared in environments with orientationally restricted contours. The aim has been to find out what effects the visual field has on the development of the visual system in the brain. In this paper we describe similar experiments performed with a robot acting in an environment with only vertical contours and compare the results with the same robot in an ordinary office environment. Using metric projections of the informational distances between sensors it is shown that all visual sensors in the same vertical column are clustered together in the environment with only vertical contours. We also show how the informational structure of the sensors unfold when the robot moves from the environment with oriented contours to a normal environment.</summary>
  <author>
    <name>Lars Olsson</name>
    <email/>
  </author>
  <author>
    <name>Chrystopher L. Nehaniv</name>
    <email/>
  </author>
  <author>
    <name>Daniel Polani</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4059/Atom/cogprints-eprint-4059.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4059"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4059/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4059/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4059"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4059</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4059"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4059</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4059">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Embodied cognition through cultural interaction</title>
  <summary type="xhtml">In this short paper we describe a robotic setup to study the self-organization of conceptualisation and language. What distinguishes this project from others is that we envision a robot with specific cognitive capacities, but without resorting to any pre-programmed representations or conceptualisations. The key to all of this is self-organization and enculturation. We report preliminary results on learning motor behaviours through imitation, and sketch how language plays a pivotal role in constructing world representations.</summary>
  <author>
    <name>Bart De Vylder</name>
    <email/>
  </author>
  <author>
    <name>Bart Jansen</name>
    <email/>
  </author>
  <author>
    <name>Tony Belpaeme</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4062/Atom/cogprints-eprint-4062.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4062"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4062/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4062/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4062"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4062</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4062"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4062</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4062">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Feel the beat: using cross-modal rhythm to integrate perception of objects, others, and self</title>
  <summary type="xhtml">For a robot to be capable of development, it must be able to explore its environment and learn from its experiences. It must find (or create) opportunities to experience the unfamiliar in ways that reveal properties valid beyond the immediate context. In this paper, we develop a novel method for using the rhythm of everyday actions as a basis for identifying the characteristic appearance and sounds associated with objects, people, and the robot itself. Our approach is to identify and segment groups of signals in individual modalities (sight, hearing, and proprioception) based on their rhythmic variation, then to identify and bind causally-related groups of signals across different modalities. By including proprioception as a modality, this cross-modal binding method applies to the robot itself, and we report a series of experiments in which the robot learns about the characteristics of its own body.</summary>
  <author>
    <name>Paul Fitzpatrick</name>
    <email/>
  </author>
  <author>
    <name>Artur Arsenio</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4054/Atom/cogprints-eprint-4054.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4054"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4054/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4054/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4054"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:49Z</updated>
  <id>http://cogprints.org/id/eprint/4054</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4054"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4054</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4054">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Grounding Symbols in Perception with two Interacting Autonomous Robots</title>
  <summary type="xhtml">Grounding symbolic representations in perception is a key and difficult issue for artificial intelligence. The “Talking Heads” experiment (Steels and Kaplan, 2002) explores an interesting coupling between grounding and social learning of language. In the first version of this experiment, two cameras interacted in a simplified visual environment made of colored shapes on a white board, and they developed a shared, grounded lexicon. We present here the beginning of a new experiment that extends the original one with two autonomous robots instead of two cameras and a complex, unconstrained visual environment. We review the difficulties raised specifically by the embodiment of the agents and propose some directions to address these questions.</summary>
  <author>
    <name>Jean-Christophe Baillie</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4072/Atom/cogprints-eprint-4072.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4072"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4072/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4072/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4072"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4072</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4072"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4072</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4072">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Influencing Robot Learning Through Design and Social Interactions: A Balancing Framework</title>
  <summary type="xhtml">We present a framework for addressing a challenging trade-off between influencing the learning of a robot through design and through social interactions. We identify different kinds of influences that a designer can introduce at design time, and that an expert can introduce using social interactions, and we use these to characterise a two-dimensional design space. As well as discussing how the two sources of influence affect each other, we propose how learning performance typically varies as a result, and present some empirical findings.</summary>
  <author>
    <name>Yuval Marom</name>
    <email/>
  </author>
  <author>
    <name>Gillian Haynes</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4144/Atom/cogprints-eprint-4144.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4144"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4144/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4144/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4144"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:52Z</updated>
  <id>http://cogprints.org/id/eprint/4144</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4144"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4144</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4144">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Intelligent Adaptive Curiosity: a source of Self-Development</title>
  <summary type="xhtml">This paper presents the mechanism of Intelligent Adaptive Curiosity. This is a drive which pushes the robot towards situations in which it maximizes its learning progress. It makes the robot focus on situations which are neither too predictable nor too unpredictable. This mechanism is a source of self-development for the robot: the complexity of its activity autonomously increases. Indeed, we show that it first spends time in situations which are easy to learn, then progressively shifts its attention to situations of increasing difficulty, avoiding situations in which nothing can be learnt.</summary>
  <author>
    <name>Pierre-Yves Oudeyer</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4056/Atom/cogprints-eprint-4056.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4056"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4056/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4056/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4056"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:49Z</updated>
  <id>http://cogprints.org/id/eprint/4056</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4056"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4056</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4056">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Introduction: The Fourth International Workshop on Epigenetic Robotics</title>
  <summary type="xhtml">As in the previous editions, this workshop aims to be a forum for multi-disciplinary research ranging from developmental psychology to the neural sciences (in the widest sense) and robotics, including computational studies. The aim is two-fold: on the one hand, understanding the brain through engineering embodied systems and, on the other hand, building artificial epigenetic systems. The term epigenetic carries the idea that we are interested in studying development through interaction with the environment. This idea entails the embodiment of the system, its situatedness in the environment, and of course a prolonged period of postnatal development during which this interaction can actually take place. This is still a relatively new endeavor, although the seeds of the developmental robotics community have been in the air since the nineties (Berthouze and Kuniyoshi, 1998; Metta et al., 1999; Brooks et al., 1999; Breazeal, 2000; Kozima and Zlatev, 2000). A few had the intuition – see Lungarella et al. (2003) for a comprehensive review – that intelligence could not possibly be engineered simply by copying systems that are "ready made", but rather that the development of the system plays a major role. This integration of disciplines raises the important issue of learning on the multiple scales of developmental time, that is, how to build systems that can eventually learn in any environment rather than programming them for a specific environment. On the other hand, the hope is that robotics might become a new tool for brain science, similarly to what simulation and modeling have become for the study of the motor system. Our community is still very much evolving and "under construction", and for this reason we tried to encourage submissions from the psychology community. Additionally, we invited four neuroscientists and no roboticists for the keynote lectures. 
We received a record number of submissions (more than 50), and given the overall size and duration of the workshop, together with our desire to maintain a single-track format, we had to be more selective than ever in the review process (a 20% acceptance rate on full papers). This is, if not an index of quality, at least an index of the interest that gravitates around this still new discipline.</summary>
  <author>
    <name>Luc Berthouze</name>
    <email/>
  </author>
  <author>
    <name>Giorgio Metta</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4148/Atom/cogprints-eprint-4148.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4148"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4148/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4148/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4148"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:52Z</updated>
  <id>http://cogprints.org/id/eprint/4148</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4148"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4148</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4148">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A Multimodal Hierarchical Approach to Robot Learning by Imitation</title>
  <summary type="xhtml">In this paper we propose an approach to robot learning by imitation that uses the multimodal inputs of language, vision and motor. In our approach a student robot learns from a teacher robot how to perform three separate behaviours based on these inputs. We considered two neural architectures for performing this robot learning. First, a one-step hierarchical architecture trained with two different learning approaches, based either on Kohonen's self-organising map or on the Helmholtz machine, turns out to be inefficient or incapable of performing differentiated behaviour. In response we produced a hierarchical architecture that combines both learning approaches to overcome these problems. In doing so, the proposed robot system models specific aspects of learning using concepts of the mirror neuron system (Rizzolatti and Arbib, 1998) with regard to demonstration learning.</summary>
  <author>
    <name>Cornelius Weber</name>
    <email/>
  </author>
  <author>
    <name>Mark Elshaw</name>
    <email/>
  </author>
  <author>
    <name>Alex Zochios</name>
    <email/>
  </author>
  <author>
    <name>Stefan Wermter</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4061/Atom/cogprints-eprint-4061.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4061"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4061/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4061/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4061"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4061</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4061"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4061</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4061">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">An Ontogenetic Model of Perceptual Organization for a developmental Robot</title>
  <summary type="xhtml">This paper presents an ontogenetic model of self-organization for robotic intermediary vision. Two mechanisms are considered. First, the development of low-level local feature detectors that perform a piecewise categorization of the sensory signal. Second, the hierarchical grouping of these local features into a holistic perception. While the grouping mechanism is expressed as a classical agglomerative clustering, the underlying similarity measures are not pre-given but developed from the signal statistics.</summary>
  <author>
    <name>Remi Driancourt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4071/Atom/cogprints-eprint-4071.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4071"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4071/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4071/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4071"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4071</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4071"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4071</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4071">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Protosymbols that integrate recognition and response</title>
  <summary type="xhtml">We explore two controversial hypotheses through robotic implementation: (1) Processes involved in recognition and response are tightly coupled both in their operation and epigenesis; and (2) processes involved in symbol emergence should respect the integrity of recognition and response while exploiting the periodicity of biological motion. To that end, this paper proposes a method of recognizing and generating motion patterns based on nonlinear principal component neural networks that are constrained to model both periodic and transitional movements. The method is evaluated by an examination of its ability to segment and generalize different kinds of soccer playing activity during a RoboCup match.</summary>
  <author>
    <name>Karl F. MacDorman</name>
    <email/>
  </author>
  <author>
    <name>Rawichote Chalodhorn</name>
    <email/>
  </author>
  <author>
    <name>Hiroshi Ishiguro</name>
    <email/>
  </author>
  <author>
    <name>Minoru Asada</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4058/Atom/cogprints-eprint-4058.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4058"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4058/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4058/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4058"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4058</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4058"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4058</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4058">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Sharing Meaning with Machines</title>
  <summary type="xhtml">Communication can be described as the act of sharing meaning, so if humans and machines are to communicate, how is this to be achieved between such different creatures? This paper examines what else the communicators need to share in order to share meaning, including perception, categorisation, attention, sociability and consciousness. It compares, and takes inspiration from, communication with others whose perception and categorisation differ from our own, including the deaf-blind, the autistic and animals.</summary>
  <author>
    <name>Claire D’Este</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4064/Atom/cogprints-eprint-4064.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4064"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4064/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4064/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4064"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:50Z</updated>
  <id>http://cogprints.org/id/eprint/4064</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4064"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4064</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4064">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Simulating development in a real robot: on the concurrent increase of sensory, motor, and neural complexity</title>
  <summary type="xhtml">We present a quantitative investigation on the effects of a discrete developmental progression on the acquisition of a foveation behavior by a robotic hand-arm-eyes system. Development is simulated by (a) increasing the resolution of visual and tactile systems, (b) freezing and freeing mechanical degrees of freedom, and (c) adding neuronal units to the neural control architecture. Our experimental results show that a system starting with a low-resolution sensory system, a low precision motor system, and a low complexity neural structure learns faster than a system which is more complex at the beginning.</summary>
  <author>
    <name>Gabriel Gomez</name>
    <email/>
  </author>
  <author>
    <name>Max Lungarella</name>
    <email/>
  </author>
  <author>
    <name>Peter Eggenberger Hotz</name>
    <email/>
  </author>
  <author>
    <name>Kojiro Matsushita</name>
    <email/>
  </author>
  <author>
    <name>Rolf Pfeifer</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4149/Atom/cogprints-eprint-4149.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4149"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4149/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4149/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4149"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:52Z</updated>
  <id>http://cogprints.org/id/eprint/4149</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4149"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4149</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4149">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Synchronisation and Differentiation: Two Stages of Coordinative Structure</title>
  <summary type="xhtml">While the motor skill acquisition process is regarded as the development of coordination, typically understood as synchronisation among joint movements, we found another phenomenon, which we call differentiation, arising as a consequence of synchronisation. The established synchronised movement is decomposed into several sections, or modulated so as to be executed with different timings, without breaking the coordination among them, resulting in a gain of efficiency or flexibility. In the acquisition of skills, the coordinative structure thus goes through two stages: synchronisation and differentiation. In this paper we verify our observation through experiments and a dynamical analysis of the kneading of clay in ceramic art and the playing of the shaker in samba.</summary>
  <author>
    <name>Tomoyuki Yamamoto</name>
    <email/>
  </author>
  <author>
    <name>Tsutomu Fujinami</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3744/Atom/cogprints-eprint-3744.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3744"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3744/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3744/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3744"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3744</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3744"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3744</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3744">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Teaching Bayesian Behaviours to Video Game Characters</title>
  <summary type="xhtml">This article explores an application of Bayesian programming to behaviours for synthetic video game characters. We address the problem of real-time reactive selection of elementary behaviours for an agent playing a first person shooter game. We show how Bayesian programming can lead to a condensed and easier formalisation of finite state machine-like behaviour selection, and lend itself to learning by imitation, in a fully transparent way for the player.</summary>
  <author>
    <name>R Le Hy</name>
    <email/>
  </author>
  <author>
    <name>A Arrigoni</name>
    <email/>
  </author>
  <author>
    <name>Dr P Bessiere</name>
    <email/>
  </author>
  <author>
    <name>Dr O Lebeltel</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4147/Atom/cogprints-eprint-4147.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4147"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4147/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4147/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4147"/>
  <published>2005-04-14Z</published>
  <updated>2011-03-11T08:55:52Z</updated>
  <id>http://cogprints.org/id/eprint/4147</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4147"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4147</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4147">
    <sword:depositedOn>2005-04-14Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Toward a Behavior-Grounded Representation of Tool Affordances</title>
  <summary type="xhtml"> </summary>
  <author>
    <name>Alexander Stoytchev</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3306/Atom/cogprints-eprint-3306.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3306"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3306/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3306/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3306"/>
  <published>2003-12-13Z</published>
  <updated>2011-03-11T08:55:24Z</updated>
  <id>http://cogprints.org/id/eprint/3306</id>
  <category term="thesis" label="Thesis" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3306"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3306</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3306">
    <sword:depositedOn>2003-12-13Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Evolution of the layers in a subsumption architecture robot controller</title>
  <summary type="xhtml">An approach to robotics called layered evolution, which merges features from the subsumption architecture into evolutionary robotics, is presented, and its advantages and relevance for science and engineering are discussed. This approach is used to construct a layered controller for a simulated robot that learns which light source to approach in an environment with obstacles. The evolvability and performance of layered evolution on this task are compared to (standard) monolithic evolution, and to incremental and modularised evolution. To test the optimality of the evolved solutions, the evolved controller is merged back into a single network. On the grounds of the test results, it is argued that layered evolution provides a superior approach for many tasks, and future research projects involving this approach are suggested.</summary>
  <author>
    <name>Mr Julian Togelius</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3024/Atom/cogprints-eprint-3024.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3024"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3024/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3024/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3024"/>
  <published>2003-06-19Z</published>
  <updated>2011-03-11T08:55:18Z</updated>
  <id>http://cogprints.org/id/eprint/3024</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3024"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3024</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3024">
    <sword:depositedOn>2003-06-19Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">There is no Concrete (or: Living Within One's Means)</title>
  <summary type="xhtml">We are accustomed to thinking that a primrose is "concrete" and a prime number is "abstract," that "roundness" is more abstract than "round," and that "property" is more abstract than "roundness." In reality, the relation between "abstract" and "concrete" is more like the (non)relation between "abstract" and "concave," "concrete" being a sensory term [about what something feels like] and "abstract" being a functional term (about what the sensorimotor system is doing with its input in order to produce its output): Feelings and things are correlated, but otherwise incommensurable.

Everything that any sensorimotor system such as ourselves manages to categorize successfully is based on abstracting sensorimotor "affordances" (invariant features). The rest is merely a question of what inputs we can and do categorize, and what we must abstract from the particulars of each sensorimotor interaction in order to be able to categorize them correctly. To categorize, in other words, is to abstract. And not to categorize is merely to experience.

Borges's Funes the Memorious, with his infinite, infallible rote memory, is a fictional hint at what it would be like not to be able to categorize, not to be able to selectively forget and ignore most of our input by abstracting only its reliably recurrent invariants. But a sensorimotor system like Funes would not really be viable, for if something along those lines did exist, it could not categorize recurrent objects, events or states, hence it could have no language, private or public, and could at most only feel, not function adaptively (hence survive).

Luria's "S" in "The Mind of a Mnemonist" is a real-life approximation whose difficulties in conceptualizing were directly proportional to his difficulties in selectively forgetting and ignoring.

Watanabe's "Ugly Duckling Theorem" shows how, if we did not selectively weight some properties more heavily than others, everything would be equally (and infinitely and indifferently) similar to everything else.

Miller's "Magical Number Seven Plus or Minus Two" shows that there are (and must be) limitations on our capacity to process and remember information, both in our capacity to discriminate relatively (detect sameness/difference, degree-of-similarity) and in our capacity to discriminate absolutely (identify, categorize, name).

The phenomenon of categorical perception shows how selective feature-detection puts a Whorfian "warp" on our feelings of similarity in the service of categorization, compressing within-category similarities and expanding between-category differences by abstracting and selectively filtering inputs through their invariant features, thereby allowing us to sort and name things reliably.

Language does allow us to acquire categories indirectly through symbolic description ("hearsay," definition) instead of just through direct sensorimotor trial-and-error experience, but to do so, all the categories named and used in the description must be recursively grounded in direct sensorimotor invariants. Language is largely a way to ground new categories by recombining already grounded ones, often by making their implicit invariant features into explicit categories too.

If prime numbers differ from primroses, it is hence only in the degree to which they happen to be indirect, explicit, language-mediated categories. Like everything else, they are recursively grounded in sensorimotor invariants. The democracy of things is that, for sensorimotor systems like ourselves, all things are just absolute discriminables: they number among those categories that our sensorimotor interactions can potentially afford, no more, no less. A primrose affords dicotyledonousness as reliably (if not as surely) as a numerosity of 6 (e.g., 6 primroses) affords factoring (whereas 7 does not).</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3346/Atom/cogprints-eprint-3346.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3346"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3346/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3346/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3346"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:26Z</updated>
  <id>http://cogprints.org/id/eprint/3346</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3346"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3346</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3346">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Action Oriented Adaptive Language Games</title>
  <summary type="xhtml"> </summary>
  <author>
    <name>Robert Clowes</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3054/Atom/cogprints-eprint-3054.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3054"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3054/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3054/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3054"/>
  <published>2003-07-16Z</published>
  <updated>2011-03-11T08:55:18Z</updated>
  <id>http://cogprints.org/id/eprint/3054</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3054"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3054</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3054">
    <sword:depositedOn>2003-07-16Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Anchoring of semiotic symbols</title>
  <summary type="xhtml">This paper presents arguments for approaching the anchoring problem using semiotic symbols. Semiotic symbols are defined by a triadic relation between forms, meanings and referents, thus having an implicit relation to the real world. Anchors are formed between these three elements rather than between 'traditional' symbols and sensory images. This allows an optimization between the form (i.e. the 'traditional' symbol) and the referent. A robotic experiment based on adaptive language games illustrates how the anchoring of semiotic symbols can be achieved in a bottom-up fashion. The paper concludes that applying semiotic symbols is a potentially valuable approach toward anchoring.</summary>
  <author>
    <name>Paul Vogt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3060/Atom/cogprints-eprint-3060.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3060"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3060/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3060/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3060"/>
  <published>2003-07-16Z</published>
  <updated>2011-03-11T08:55:19Z</updated>
  <id>http://cogprints.org/id/eprint/3060</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3060"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3060</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3060">
    <sword:depositedOn>2003-07-16Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Anchoring symbols to sensorimotor control</title>
  <summary type="xhtml">This paper investigates how robots may develop a lexicon to communicate complex meanings about actions such as 'I am going to the red target' using simple (one-word) utterances. The main issue of the paper concerns the way these complex meanings represent the actions that are performed. It is argued that the meaning of these utterances may be represented without the need for categorising a complex flow of sensorimotor data. To illustrate the point, a simulation is presented in which robots develop such a communication system. The paper concludes by confirming that it is entirely possible to construct such a lexicon once robots have a number of basic sensorimotor skills available.</summary>
  <author>
    <name>Paul Vogt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3752/Atom/cogprints-eprint-3752.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3752"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3752/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3752/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3752"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3752</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3752"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3752</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3752">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Approximate Discrete Probability Distribution Representation using a Multi-Resolution Binary Tree</title>
  <summary type="xhtml">Computing and storing probabilities is a hard problem as soon as one has to deal with complex distributions over multiple random variables. The problem of efficiently representing probability distributions is central to computational efficiency in the field of probabilistic reasoning. The main problem arises when dealing with joint probability distributions over a set of random variables: they are always represented using huge probability arrays. In this paper, a new method based on a binary-tree representation is introduced in order to store very large joint distributions efficiently. Our approach approximates any multidimensional joint distribution using an adaptive discretization of the space. We make the assumption that the lower the probability mass of a particular region of feature space, the larger the discretization step. This assumption leads to a highly optimized representation in terms of time and memory. The other advantages of our approach are the ability to dynamically refine the distribution whenever needed, leading to a more accurate representation of the probability distribution, and an anytime representation of the distribution.</summary>
  <author>
    <name>Dr D Bellot</name>
    <email/>
  </author>
  <author>
    <name>Dr P Bessiere</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3754/Atom/cogprints-eprint-3754.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3754"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3754/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3754/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3754"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3754</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3754"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3754</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3754">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Bayesian Programming Multi-Target Tracking: an Automotive Application</title>
  <summary type="xhtml">A prerequisite to the design of future Advanced Driver Assistance Systems for cars is a sensing system providing all the information required for high-level driving assistance tasks. In particular, target tracking is still challenging in urban traffic situations, because of the large number of rapidly maneuvering targets. The goal of this paper is to present an original way to estimate target position and velocity, based on the occupancy grid framework. The main interest of this method is that it avoids the decision problem of classical multi-target tracking algorithms. The obtained occupancy grids are combined with danger estimation to perform an elementary obstacle-avoidance task with an electric car.</summary>
  <author>
    <name>C Coue</name>
    <email/>
  </author>
  <author>
    <name>C Pradalier</name>
    <email/>
  </author>
  <author>
    <name>C Laugier</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3334/Atom/cogprints-eprint-3334.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3334"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3334/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3334/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3334"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3334</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3334"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3334</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3334">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Beyond Gazing, Pointing, and Reaching: A Survey of Developmental Robotics</title>
  <summary type="xhtml">Developmental robotics is an emerging field located at the intersection of developmental psychology and robotics that has lately attracted considerable attention. This paper gives a survey of a variety of research projects dealing with or inspired by developmental issues, and outlines possible future directions.</summary>
  <author>
    <name>Max Lungarella</name>
    <email/>
  </author>
  <author>
    <name>Giorgio Metta</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2460/Atom/cogprints-eprint-2460.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2460"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2460/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2460/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2460"/>
  <published>2002-09-15Z</published>
  <updated>2011-03-11T08:55:00Z</updated>
  <id>http://cogprints.org/id/eprint/2460</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2460"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2460</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2460">
    <sword:depositedOn>2002-09-15Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Can a machine be conscious? How?</title>
  <summary type="xhtml">A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how they pass the Turing Test, but not how, why or whether that makes them feel.</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/5330/Atom/cogprints-eprint-5330.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/5330"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/5330/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/5330/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/5330"/>
  <published>2006-12-22Z</published>
  <updated>2015-11-19T23:48:54Z</updated>
  <id>http://cogprints.org/id/eprint/5330</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/5330"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/5330</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/5330">
    <sword:depositedOn>2006-12-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Can a machine be conscious? How?</title>
  <summary type="xhtml">A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how they pass the Turing Test, but not how, why or whether that makes them feel.</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3336/Atom/cogprints-eprint-3336.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3336"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3336/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3336/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3336"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3336</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3336"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3336</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3336">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Collaboration Development through Interactive Learning between Human and Robot</title>
  <summary type="xhtml">In this paper, we investigated interactive learning between human subjects and a robot experimentally, and examined its essential characteristics using the dynamical systems approach. Our research concentrated on the navigation system of a specially developed humanoid robot called Robovie and seven human subjects whose eyes were covered, making them dependent on the robot for directions. We compared the usual feed-forward neural network (FFNN) without recursive connections and the recurrent neural network (RNN). Although the performances obtained with both the RNN and the FFNN improved in the early stages of learning, as the subjects changed their operation through their own learning, all performances gradually became unstable and failed. Results of a questionnaire given to the subjects confirmed that the FFNN gives better mental impressions, especially from the aspect of operability. When the robot used a consolidation-learning algorithm using the rehearsal outputs of the RNN, the performance improved even when interactive learning continued for a long time. The questionnaire results then also confirmed that the subjects' mental impressions of the RNN improved significantly. The dynamical systems analysis of the RNNs supports these differences and also showed that the collaboration scheme developed dynamically through successive phase transitions.</summary>
  <author>
    <name>Tetsuya Ogata</name>
    <email/>
  </author>
  <author>
    <name>Noritaka Masago</name>
    <email/>
  </author>
  <author>
    <name>Shigeki Sugano</name>
    <email/>
  </author>
  <author>
    <name>Jun Tani</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3261/Atom/cogprints-eprint-3261.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3261"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3261/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3261/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3261"/>
  <published>2003-11-03Z</published>
  <updated>2011-03-11T08:55:23Z</updated>
  <id>http://cogprints.org/id/eprint/3261</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3261"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3261</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3261">
    <sword:depositedOn>2003-11-03Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Compact Integrated Transconductance Amplifier Circuit for Temporal Differentiation</title>
  <summary type="xhtml">A compact integrated CMOS circuit for temporal differentiation is presented. It consists of a high-gain inverting amplifier, an active non-linear transconductance and a capacitor, and requires only 4 transistors in its minimal configuration. The circuit provides two rectified current outputs that are proportional to the temporal derivative of the input voltage signal. Besides the compactness of its design, the presented circuit is not dependent on the DC value of the input signal, in contrast to known integrated differentiator circuits. Measured chip results show that the circuit operates over a large input frequency range for which it provides near-ideal temporal differentiation. The circuit is particularly suited for focal-plane implementations of gradient-based visual motion systems.</summary>
  <author>
    <name>Dr. Alan Stocker</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3352/Atom/cogprints-eprint-3352.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3352"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3352/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3352/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3352"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:26Z</updated>
  <id>http://cogprints.org/id/eprint/3352</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3352"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3352</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3352">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Concept Acquisition Using Isomap on Sensorimotor Experiences of a Mobile Robot</title>
  <summary type="xhtml">We present results on the application of a novel method for multidimensional scaling (Isomap) to concept acquisition in mobile robotics. The aim of this work is to develop a general architecture for symbol anchoring in the context of research to enable artefacts to grow up. We describe Isomap's functionality, present results of using it on a real robot, and briefly discuss the implications of using this technique for concept acquisition in the mobile robot domain.</summary>
  <author>
    <name>Patrick M. Poelz</name>
    <email/>
  </author>
  <author>
    <name>Erich Prem</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3345/Atom/cogprints-eprint-3345.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3345"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3345/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3345/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3345"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:26Z</updated>
  <id>http://cogprints.org/id/eprint/3345</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3345"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3345</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3345">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Conceptual Spaces and Robotic Emotions</title>
  <summary type="xhtml">In recent years, there has been a growing interest in modelling emotional responses inside the perception-action loop of an autonomous robot. One of the motivations of this trend is that an emotional system could introduce complex decision-making capabilities in robots in a faster and more flexible way than symbolic deliberative architectures. However, recent proposals in the literature model emotions at a very low level (Arkin et al., 2003, Murphy et al., 2002). Briefly, a robot's emotional state is simply associated with suitable parameters of the reactive behaviors. Instead, emotions may have an important role at a higher, conceptual level of the robot's reasoning. It has been claimed that the emotional states of an agent may be related to its internal motivations (Balkenius, 1995). For example, an agent has a pleasure response when its motivations are well satisfied. In more detail, a distinction is usually made between primary and higher-order emotions. Primary emotions are related to the immediate perceptions and motivations of the agent. They can be hardwired or, if learned, they are not easily forgotten. Higher-order emotions are instead related to the long-term motivations of the agent; in general they are learned during operation. In the proposed system, both primary and higher-order robot emotions are represented in terms of a conceptual space (Gardenfors, 2000). The system has been implemented on the autonomous robot operating at the Robotics Laboratory of the University of Palermo (a RWI B21 equipped with laser and stereo head). The task of the robot is to offer guided tours in the Museum of Electrical Equipments at the Department of Electrical Engineering.</summary>
  <author>
    <name>Antonio Chella</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3342/Atom/cogprints-eprint-3342.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3342"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3342/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3342/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3342"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3342</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3342"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3342</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3342">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Conjunctive Visual and Auditory Development via Real-Time Dialogue</title>
  <summary type="xhtml">Human developmental learning is capable of dealing with the dynamic visual world, speech-based dialogue, and their complex real-time association. However, an architecture that realizes this for robotic cognitive development has not previously been reported. This paper takes up this challenge. The proposed architecture does not require a strict coupling between visual and auditory stimuli. Two major operations contribute to the “abstraction” process: multiscale temporal priming and high-dimensional numeric abstraction through internal responses with reduced variance. As a basic principle of developmental learning, the programmer does not know the nature of the world events at the time of programming and, thus, a hand-designed task-specific representation is not possible. We successfully tested the architecture on the SAIL robot under an unprecedentedly challenging multimodal interaction mode: using real-time speech dialogue as a teaching source for simultaneous and incremental visual learning and language acquisition, while the robot is viewing a dynamic world that contains a rotating object to which the dialogue refers.</summary>
  <author>
    <name>Yilu Zhang</name>
    <email/>
  </author>
  <author>
    <name>Juyang Weng</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3341/Atom/cogprints-eprint-3341.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3341"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3341/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3341/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3341"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3341</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3341"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3341</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3341">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A Constructive Model of Mother-Infant Interaction towards Infant’s Vowel Articulation</title>
  <summary type="xhtml">Human infants seem to acquire phonemes in common with adults without the capability to articulate them or any explicit knowledge. To understand this unrevealed process of human cognitive development, building a robot that reproduces such a developmental process seems effective. It will also contribute to a design principle for a robot that can communicate with human beings. Based on implications from behavioral studies, this paper hypothesizes that the caregiver’s parroting of the robot’s cooing plays an important role in the phoneme acquisition process, and proposes a constructive model for it. We validate the proposed model by examining whether a real robot can acquire Japanese vowels through interactions with its caregiver.</summary>
  <author>
    <name>Yuichiro Yoshikawa</name>
    <email/>
  </author>
  <author>
    <name>Junpei Koga</name>
    <email/>
  </author>
  <author>
    <name>Minoru Asada</name>
    <email/>
  </author>
  <author>
    <name>Koh Hosoda</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3354/Atom/cogprints-eprint-3354.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3354"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3354/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3354/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3354"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:26Z</updated>
  <id>http://cogprints.org/id/eprint/3354</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3354"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3354</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3354">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Development and Extension of the Robot Body Schema</title>
  <summary type="xhtml"> </summary>
  <author>
    <name>Alexander Stoytchev</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3343/Atom/cogprints-eprint-3343.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3343"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3343/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3343/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3343"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:26Z</updated>
  <id>http://cogprints.org/id/eprint/3343</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3343"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3343</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3343">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A Developmental Approach for low-level Imitations</title>
  <summary type="xhtml">Historically, many authors in psychology and in robotics have tended to separate "true imitation" and its related high-level mechanisms, which seem to be exclusive to human adults, from the low-level imitations or "mimicries" observed in babies or primates. Similarly, classical research supposes that an imitative artificial system must be able to build a model of the demonstrator's geometry in order to finely reproduce the movements of each joint. Conversely, we advocate that if imitation is viewed as part of a developmental course, then (1) an artificial developing system does not need to build any internal model of the other in order to perform real-time, low-level imitations of human movements, despite the related correspondence problem between man and robot, and (2) a simple sensory-motor loop could be at the basis of the multiple heterogeneous imitative behaviors often explained in the literature by different models.</summary>
  <author>
    <name>Pierre Andry</name>
    <email/>
  </author>
  <author>
    <name>Philippe Gaussier</name>
    <email/>
  </author>
  <author>
    <name>Jacqueline Nadel</name>
    <email/>
  </author>
  <author>
    <name>Michele Courant</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3366/Atom/cogprints-eprint-3366.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3366"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3366/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3366/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3366"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:27Z</updated>
  <id>http://cogprints.org/id/eprint/3366</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3366"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3366</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3366">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A Developmental Organization for Robot Behavior</title>
  <summary type="xhtml">This paper focuses on exploring how learning and development can be structured in synthetic (robot) systems. We present a developmental assembler for constructing reusable and temporally extended actions in a sequence. The discussion adopts the traditions of dynamic pattern theory, in which behavior is an artifact of coupled dynamical systems with a number of controllable degrees of freedom. In our model, the events that delineate control decisions are derived from the pattern of (dis)equilibria on a working subset of sensorimotor policies. We show how this architecture can be used to accomplish sequential knowledge gathering and representation tasks, and provide examples of the kind of developmental milestones that this approach has already produced in our lab.</summary>
  <author>
    <name>Roderic A. Grupen</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3330/Atom/cogprints-eprint-3330.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3330"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3330/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3330/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3330"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3330</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3330"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3330</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3330">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A Dynamical Analysis of Kneading Using a Motion Capture Device</title>
  <summary type="xhtml">Physical skills such as playing a musical instrument are hard to learn and take a long time to master. To investigate what makes physical skills so difficult to learn, and how the level of skill can be evaluated, we examined kneading in ceramic art, an action that prepares clay for shaping, and studied the physical movements of both learners and experts. Kneading is an appropriate example of a physical skill for studying body movement because all parts of the body must be coordinated to accomplish the task. The task is not hopelessly difficult for complete novices following instruction, although their end result is not satisfactory; it normally takes about three years to master the kneading skill. It is also relatively easy to judge how well subjects accomplished the task by observing the shape of the clay. After careful examination of the movement using video tapes, we employed a motion capture device to collect movement data from an expert, an experienced person, and three novices. We discovered that the expert elegantly splits his body into two parts, torso and arms, and effectively coordinates these two parts while kneading the clay.</summary>
  <author>
    <name>Mamiko Abe</name>
    <email/>
  </author>
  <author>
    <name>Tomoyuki Yamamoto</name>
    <email/>
  </author>
  <author>
    <name>Tsutomu Fujinami</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3759/Atom/cogprints-eprint-3759.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3759"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3759/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3759/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3759"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:40Z</updated>
  <id>http://cogprints.org/id/eprint/3759</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3759"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3759</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3759">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Expressing Bayesian Fusion as a Product of Distributions: Application in Robotics</title>
  <summary type="xhtml">More and more fields of applied computer science involve the fusion of multiple data sources, such as sensor readings or model decisions. However, the incompleteness of the models prevents the programmer from having absolute precision over their variables. The Bayesian framework is therefore well suited to such a process, as it allows uncertainty to be handled. We are interested in the ability to express any fusion process as a product, as this can reduce complexity in both time and space. In this paper we study various fusion schemes and propose adding a consistency variable to justify the use of a product to compute the distribution over the fused variable. We then show applications of this new fusion process to the localization of a mobile robot and to obstacle avoidance.</summary>
  <author>
    <name>C Pradalier</name>
    <email/>
  </author>
  <author>
    <name>F Colas</name>
    <email/>
  </author>
  <author>
    <name>P Bessiere</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3758/Atom/cogprints-eprint-3758.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3758"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3758/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3758/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3758"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:40Z</updated>
  <id>http://cogprints.org/id/eprint/3758</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3758"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3758</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3758">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Expressing Bayesian Fusion as a Product of Distributions: Application to Randomized Hough Transform</title>
  <summary type="xhtml">Data fusion is a common issue in mobile robotics, computer-assisted medical diagnosis, and the behavioral control of simulated characters, for instance. However, data sources are often noisy, expert opinions are not known with absolute precision, and motor commands do not always act on the environment in exactly the same manner. In these cases, classical logic fails to manage the fusion process efficiently; confronting different pieces of knowledge in an uncertain environment can instead be adequately formalized in the Bayesian framework. Bayesian fusion, however, can be expensive in terms of memory usage and processing time. This paper aims precisely at expressing any Bayesian fusion process as a product of probability distributions in order to reduce its complexity. We first study both direct and inverse fusion schemes. We show that, contrary to direct models, inverse local models need a specific prior in order for the fusion to be computable as a product. We therefore propose to add a consistency variable to each local model, and we show that these additional variables allow a product of the local distributions to be used to compute the global probability distribution over the fused variable. Finally, we take the example of the Randomized Hough Transform. We rewrite it in the Bayesian framework, considering it as a fusion process that extracts lines from pairs of dots in a picture. As expected, under the appropriate assumptions we recover the expression of the Randomized Hough Transform found in the literature.</summary>
  <author>
    <name>C Pradalier</name>
    <email/>
  </author>
  <author>
    <name>F Colas</name>
    <email/>
  </author>
  <author>
    <name>P Bessiere</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3664/Atom/cogprints-eprint-3664.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3664"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3664/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3664/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3664"/>
  <published>2004-06-05Z</published>
  <updated>2011-03-11T08:55:37Z</updated>
  <id>http://cogprints.org/id/eprint/3664</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3664"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3664</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3664">
    <sword:depositedOn>2004-06-05Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Grounded Concept Development Using Introspective Atoms</title>
  <summary type="xhtml">In this paper we present a system that uses its underlying physiology, a hierarchical memory, and a collection of memory management algorithms to learn concepts as cases and to build higher-level concepts from experiences represented as sequences of atoms. Using a memory structure that requires all base memories to be grounded in introspective atoms, the system builds a set of grounded concepts that must all be formed from and applied to this same set of atoms. All interaction the system has with its environment must be represented by the system itself and can therefore, given a complete ability to perceive its own physiological and mental processes, be modeled and recreated.</summary>
  <author>
    <name>Eric Berkowitz</name>
    <email>eberkowi</email>
  </author>
  <author>
    <name>Mastenbrook Brian</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3059/Atom/cogprints-eprint-3059.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3059"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3059/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3059/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3059"/>
  <published>2003-07-16Z</published>
  <updated>2011-03-11T08:55:19Z</updated>
  <id>http://cogprints.org/id/eprint/3059</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3059"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3059</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3059">
    <sword:depositedOn>2003-07-16Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Grounded lexicon formation without explicit reference transfer: who's talking to who?</title>
  <summary type="xhtml">This paper presents a first investigation regarding lexicon grounding and evolution under an iterated learning regime without an explicit transfer of reference. In the original iterated learning framework, a population contains adult speakers and learning hearers. In this paper I investigate the effects of allowing both adults and learners to take up the role of speakers and hearers with varying probabilities. The results indicate that when adults and learners can be selected as speakers and hearers, their lexicons become more similar but at the cost of reduced success in communication.</summary>
  <author>
    <name>Paul Vogt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3335/Atom/cogprints-eprint-3335.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3335"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3335/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3335/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3335"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3335</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3335"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3335</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3335">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">How does an infant acquire the ability of joint attention?: A Constructive Approach</title>
  <summary type="xhtml">This study considers, from the viewpoint of a constructive approach, how a human infant acquires the ability of joint attention through interactions with its caregiver. The paper presents a constructive model by which a robot acquires sensorimotor coordination for joint attention based on visual attention and learning with self-evaluation. Since visual attention does not always correspond to joint attention, the robot may encounter incorrect learning situations for joint attention as well as correct ones. However, the robot is expected to statistically discard the incorrect data as outliers through learning, and consequently to acquire the appropriate sensorimotor coordination for joint attention even though the environment is not controlled and the caregiver provides no task evaluation. The experimental results suggest that the proposed model could explain the developmental mechanism of the infant's joint attention, because the learning process of the robot's joint attention can be regarded as equivalent to the developmental process of the infant's.</summary>
  <author>
    <name>Yukie Nagai</name>
    <email/>
  </author>
  <author>
    <name>Koh Hosoda</name>
    <email/>
  </author>
  <author>
    <name>Minoru Asada</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3331/Atom/cogprints-eprint-3331.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3331"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3331/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3331/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3331"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3331</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3331"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3331</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3331">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Information Theory and Representation in Associative Word Learning</title>
  <summary type="xhtml">A significant portion of early language learning can be viewed as an associative learning problem. We investigate the use of associative language learning based on the principle that words convey Shannon information about the environment. We discuss the shortcomings in representation used by previous associative word learners and propose a functional representation that not only denotes environmental categories, but serves as the basis for activities and interaction with the environment. We present experimental results with an autonomous agent acquiring language.</summary>
  <author>
    <name>Brendan Burns</name>
    <email/>
  </author>
  <author>
    <name>Charles Sutton</name>
    <email/>
  </author>
  <author>
    <name>Clayton Morrison</name>
    <email/>
  </author>
  <author>
    <name>Paul Cohen</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3339/Atom/cogprints-eprint-3339.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3339"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3339/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3339/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3339"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3339</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3339"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3339</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3339">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Interactivist approach to representation in epigenetic agents</title>
  <summary type="xhtml">Interactivism is a vast and rather ambitious philosophical and theoretical system, originally developed by Mark Bickhard, which covers a plethora of aspects related to mind and person. Within interactivism, an agent is regarded as an action system: an autonomous, self-organizing, self-maintaining entity that can exercise actions and sense their effects in the environment it inhabits. In this paper, we argue that interactivism is especially well suited to treating the problem of representation in epigenetic agents. More precisely, we elaborate a process-based ontology for representations and sketch a general way of discussing architectures for epigenetic agents.</summary>
  <author>
    <name>Georgi Stojanov</name>
    <email/>
  </author>
  <author>
    <name>Andrea Kulakov</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3252/Atom/cogprints-eprint-3252.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3252"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3252/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3252/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3252"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:23Z</updated>
  <id>http://cogprints.org/id/eprint/3252</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3252"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3252</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3252">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Introduction: The Third International Conference on Epigenetic Robotics</title>
  <summary type="xhtml">This paper summarizes the paper and poster contributions to the Third International Workshop on Epigenetic Robotics. The focus of this workshop is on the cross-disciplinary interaction of developmental psychology and robotics. Namely, the general goal in this area is to create robotic models of the psychological development of various behaviors. The term "epigenetic" is used in much the same sense as the term "developmental", and while we could call our topic "developmental robotics", developmental robotics can be seen as having a broader interdisciplinary emphasis. Our focus in this workshop is on the interaction of developmental psychology and robotics, and we use the phrase "epigenetic robotics" to capture this focus.</summary>
  <author>
    <name>Luc Berthouze</name>
    <email/>
  </author>
  <author>
    <name>Christopher G. Prince</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3057/Atom/cogprints-eprint-3057.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3057"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3057/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3057/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3057"/>
  <published>2003-07-16Z</published>
  <updated>2011-03-11T08:55:18Z</updated>
  <id>http://cogprints.org/id/eprint/3057</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3057"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3057</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3057">
    <sword:depositedOn>2003-07-16Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Iterated learning and grounding: from holistic to compositional languages</title>
  <summary type="xhtml">This paper presents a new computational model for studying the origins and evolution of compositional languages grounded through the interaction between agents and their environment. The model is based on previous work on adaptive grounding of lexicons and the iterated learning model. Although the model is still in a developmental phase, the first results show that a compositional language can emerge in which the structure reflects regularities present in the population's environment.</summary>
  <author>
    <name>Paul Vogt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3344/Atom/cogprints-eprint-3344.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3344"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3344/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3344/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3344"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:26Z</updated>
  <id>http://cogprints.org/id/eprint/3344</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3344"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3344</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3344">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Modeling Human Infant Learning in Embodied Artificial Entities to Produce Grounded Concepts</title>
  <summary type="xhtml">I present a system for concept development in an artificial
entity. The concept development is designed
around the foundations of human cognition while at
the same time remaining grounded in the agent or
robot’s own perception of its world.</summary>
  <author>
    <name>Eric Berkowitz</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3333/Atom/cogprints-eprint-3333.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3333"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3333/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3333/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3333"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3333</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3333"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3333</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3333">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Motivational principles for visual know-how development</title>
  <summary type="xhtml">What dynamics can enable a robot to
continuously develop new visual know-how?
We present a first experimental investigation
where an AIBO robot develops visual competences from scratch driven only by internal
motivations. The motivational principles used
by the robot are independent of any particular
task. As a consequence, they can constitute
the basis for a general approach to sensory-motor development.</summary>
  <author>
    <name>Frederic Kaplan</name>
    <email/>
  </author>
  <author>
    <name>Pierre-Yves Oudeyer</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3756/Atom/cogprints-eprint-3756.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3756"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3756/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3756/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3756"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3756</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3756"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3756</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3756">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Obstacle Avoidance and Proscriptive Bayesian Programming</title>
  <summary type="xhtml">Unexpected events and unmodeled properties of the robot's environment are among the challenges presented by the field of situated robotics. Collision avoidance is a basic safety requirement, and this paper proposes a probabilistic approach called Bayesian Programming, which aims to deal with the uncertainty, imprecision, and incompleteness of the information handled when solving the obstacle avoidance problem. Several examples illustrate the process of embodying the programmer's preliminary knowledge in a Bayesian program, and experimental results from implementing these examples on an electric vehicle are described and discussed. A video illustration of the experiments can be found at http://www.inrialpes.fr/sharp/pub/laplace</summary>
  <author>
    <name>C Koike</name>
    <email/>
  </author>
  <author>
    <name>C Pradalier</name>
    <email/>
  </author>
  <author>
    <name>P Bessiere</name>
    <email/>
  </author>
  <author>
    <name>E Mazer</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3347/Atom/cogprints-eprint-3347.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3347"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3347/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3347/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3347"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:26Z</updated>
  <id>http://cogprints.org/id/eprint/3347</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3347"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3347</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3347">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Perceptual Abstraction for Robotic Cognitive Development</title>
  <summary type="xhtml">We are concerned with the design of a developmental
robot that learns simple models about itself
and its surroundings from scratch.
Particular attention is given to perceptual
abstraction from high-dimensional sensors.</summary>
  <author>
    <name>Remi Driancourt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3741/Atom/cogprints-eprint-3741.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3741"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3741/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3741/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3741"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3741</id>
  <category term="techreport" label="Departmental Technical Report" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3741"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3741</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3741">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Probabilistic Methodology and Techniques for Artefact Conception and Development</title>
  <summary type="xhtml">The purpose of this paper is to present a state of the art on probabilistic methodology and techniques for artefact conception and development. It is the 8th deliverable of the BIBA (Bayesian Inspired Brain and Artefacts) project. We first present the incompleteness problem as the central difficulty that both living creatures and artefacts have to face: how can they perceive, infer, decide and act efficiently with incomplete and uncertain knowledge? We then introduce a generic probabilistic formalism called Bayesian Programming. This formalism is then used to review the main probabilistic methodology
and techniques. This review is organized in three parts: first, the probabilistic models, from Bayesian networks to Kalman filters and from sensor fusion to CAD systems; second, the inference techniques; and finally, the learning, model acquisition and comparison methodologies. We conclude with the perspectives of the BIBA project as they arise from this state of the art.</summary>
  <author>
    <name>Dr P Bessiere</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3757/Atom/cogprints-eprint-3757.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3757"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3757/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3757/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3757"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3757</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3757"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3757</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3757">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Proscriptive Bayesian Programming Application for Collision Avoidance</title>
  <summary type="xhtml">Evolving safely in an unchanged environment,
possibly following an optimal trajectory, is one big
challenge presented by the situated robotics research field. Collision
avoidance is a basic safety requirement, and this
paper proposes a solution based on a probabilistic approach
called Bayesian Programming. This approach aims to deal
with the uncertainty, imprecision and incompleteness of the
information handled. Some examples illustrate the process
of embodying the programmer's preliminary knowledge into
a Bayesian program, and experimental results of the implementation
of these examples in an electric vehicle are described
and commented on. Some videos illustrating these experiments
can be found at http://www-laplace.imag.fr.</summary>
  <author>
    <name>C Koike</name>
    <email/>
  </author>
  <author>
    <name>C Pradalier</name>
    <email/>
  </author>
  <author>
    <name>P Bessiere</name>
    <email/>
  </author>
  <author>
    <name>E Mazer</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3367/Atom/cogprints-eprint-3367.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3367"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3367/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3367/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3367"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:27Z</updated>
  <id>http://cogprints.org/id/eprint/3367</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3367"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3367</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3367">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Robots, language, and meaning</title>
  <summary type="xhtml">People use language to exchange ideas and influence the actions of others through shared conceptions
of word meanings, and through a shared understanding of how word meanings are combined. Under the
surface form of words lie complex networks of mental structures and processes that give rise to the richly
textured semantics of natural language. Machines, in contrast, are unable to use language in human-like
ways due to fundamental limitations of current computational approaches to semantic representation.
To address these limitations, and to serve as a catalyst for exploring alternative approaches to language
and meaning, we are developing conversational robots. The problem of endowing robots with language
highlights the impossibility of isolating language from other cognitive processes. Instead, we embrace a
holistic approach in which various non-linguistic elements of perception, action, and memory, provide
the foundations for grounding word meaning. I will review recent results in grounding language in
perception and action and sketch ongoing work for grounding a wider range of words including social
terms such as "I" and "my".</summary>
  <author>
    <name>Deb Roy</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3760/Atom/cogprints-eprint-3760.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3760"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3760/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3760/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3760"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:40Z</updated>
  <id>http://cogprints.org/id/eprint/3760</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3760"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3760</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3760">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Simulating Vocal Imitation in Infants, using a Growth Articulatory Model and Speech Robotics</title>
  <summary type="xhtml">In order to shed light on the cognitive representations
likely to underlie early vocal imitation, we tried to simulate
Kuhl and Meltzoff's experiment (1996), using Bayesian
robotics and a statistical model of the vocal tract that had
been fitted to pre-babblers' actual vocalizations. It was
shown that audition is necessary to account for infants'
early vocal imitation performance, inasmuch as the
simulation of purely visual imitation failed to reproduce
infants' score and pattern of imitation. Further, a small
number of vocalizations (fewer than 100!) appeared to be
enough for a learning process to provide scores at least as
high as those of pre-babblers. Thus, early vocal imitation
lies within the reach of a baby robot, with only a few
assumptions about learning and imitation.</summary>
  <author>
    <name>J Serkhane</name>
    <email/>
  </author>
  <author>
    <name>J-L Schwartz</name>
    <email/>
  </author>
  <author>
    <name>P Bessiere</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3340/Atom/cogprints-eprint-3340.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3340"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3340/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3340/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3340"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3340</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3340"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3340</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3340">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Sparse visual models for biologically inspired sensorimotor control</title>
  <summary type="xhtml">Given the importance of using resources efficiently in the competition for survival, it is reasonable to think that natural evolution has discovered efficient cortical coding strategies for representing natural visual information. Sparse representations have intrinsic advantages in terms of fault tolerance and low-power consumption potential, and can therefore be attractive for robot sensorimotor control with powerful dispositions for decision-making. Inspired by the mammalian brain and its visual ventral pathway, we present in this paper a hierarchical sparse coding network architecture that extracts visual features for use in sensorimotor control. Testing with natural images demonstrates that this sparse coding facilitates processing and learning in subsequent layers. Previous studies have shown how the responses of complex cells could be sparsely represented by a higher-order neural layer. Here we extend sparse coding to each network layer, showing that detailed modeling of earlier stages in the visual pathway enhances the characteristics of the receptive fields developed in subsequent stages. The resulting network is more dynamic, with richer and more biologically plausible input and output representations.</summary>
  <author>
    <name>Li Yang</name>
    <email/>
  </author>
  <author>
    <name>Marwan Jabri</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3755/Atom/cogprints-eprint-3755.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3755"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3755/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3755/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3755"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3755</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3755"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3755</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3755">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A Survey of Probabilistic Models Using the Bayesian Programming Methodology as a Unifying Framework</title>
  <summary type="xhtml">This paper presents a survey of the most common
probabilistic models for artefact conception. We use
a generic formalism called Bayesian Programming,
which we introduce briefly, for reviewing the main
probabilistic models found in the literature. Indeed,
we show that Bayesian Networks, Markov Localization,
Kalman filters, etc., can all be captured under this single
formalism. We believe it offers the novice reader a
good introduction to these models, while still providing
the experienced reader an enriching global view of the
field.</summary>
  <author>
    <name>J Diard</name>
    <email/>
  </author>
  <author>
    <name>P Bessiere</name>
    <email/>
  </author>
  <author>
    <name>E Mazer</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3018/Atom/cogprints-eprint-3018.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3018"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3018/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3018/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3018"/>
  <published>2003-06-19Z</published>
  <updated>2011-03-11T08:55:18Z</updated>
  <id>http://cogprints.org/id/eprint/3018</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3018"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3018</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3018">
    <sword:depositedOn>2003-06-19Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Symbol-Grounding Problem</title>
  <summary type="xhtml">The Symbol Grounding Problem is related to the problem of how words get their meanings, and of what meanings are. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful.</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3058/Atom/cogprints-eprint-3058.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3058"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3058/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3058/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3058"/>
  <published>2003-07-16Z</published>
  <updated>2011-03-11T08:55:18Z</updated>
  <id>http://cogprints.org/id/eprint/3058</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3058"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3058</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3058">
    <sword:depositedOn>2003-07-16Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">THSim v3.2: The Talking Heads simulation tool</title>
  <summary type="xhtml">The field of language evolution and computation may benefit from using efficient and robust simulation tools that are based on widely exploited principles within the field. The tool presented in this paper is one that could fulfil such needs. The paper presents an overview of the tool -- THSim v3.2 -- and discusses some research questions that can be investigated with it.</summary>
  <author>
    <name>Paul Vogt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3349/Atom/cogprints-eprint-3349.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3349"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3349/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3349/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3349"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:26Z</updated>
  <id>http://cogprints.org/id/eprint/3349</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3349"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3349</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3349">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Towards Learning Affective Body Gesture</title>
  <summary type="xhtml">Robots are assuming an increasingly important role in our society. They now serve as pets and help support children's healing. In other words, they now try to maintain an active and affective communication with human agents. However, up to now, such systems have primarily relied on the human agents' ability to empathize with the system. Changes in the behavior of the system could therefore result in changes of mood or behavior in the human partner. This paper describes experiments we carried out to study the importance of body language in affective communication. The results of the experiments led us to develop a system that can incrementally learn to recognize affective messages conveyed by body postures.</summary>
  <author>
    <name>Andrea Kleinsmith</name>
    <email/>
  </author>
  <author>
    <name>Nadia Bianchi-Berthouze</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3351/Atom/cogprints-eprint-3351.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3351"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3351/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3351/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3351"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:26Z</updated>
  <id>http://cogprints.org/id/eprint/3351</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3351"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3351</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3351">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A Unified Model For Developmental Robotics</title>
  <summary type="xhtml">We present the architecture and distributed
algorithms of an implemented system called
NeuSter, which unifies learning, perception and action
for autonomous robot control. NeuSter comprises
several sub-systems that provide online
learning for networks of millions of neurons on machine
clusters. It extracts information from sensors
and builds its own representations of the environment
in order to learn non-predefined goals.</summary>
  <author>
    <name>Williams Paquier</name>
    <email/>
  </author>
  <author>
    <name>Nicolas Do Huu</name>
    <email/>
  </author>
  <author>
    <name>Raja Chatila</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3753/Atom/cogprints-eprint-3753.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3753"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3753/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3753/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3753"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3753</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3753"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3753</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3753">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Using Bayesian Programming for Multisensor Multi-Target Tracking in Automotive Applications</title>
  <summary type="xhtml">A prerequisite to the design of future Advanced Driver Assistance Systems for cars is a sensing system providing all the information required for high-level driving assistance tasks. Carsense is a European project whose purpose is to develop such a new sensing system. It will combine different sensors (laser, radar and video) and will rely on the fusion of the information coming from these sensors in order to achieve better accuracy, robustness and an increase in information content. This paper demonstrates the value of using
probabilistic reasoning techniques to address this challenging multi-sensor data fusion problem. The approach used is called Bayesian Programming. It is a general approach based on an implementation of Bayesian theory. It was first introduced to design robot control programs, but its scope of application is much broader and it can be used whenever one has to deal with problems involving uncertain or incomplete knowledge.</summary>
  <author>
    <name>C Coue</name>
    <email/>
  </author>
  <author>
    <name>T Fraichard</name>
    <email/>
  </author>
  <author>
    <name>P Bessiere</name>
    <email/>
  </author>
  <author>
    <name>E Mazer</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3353/Atom/cogprints-eprint-3353.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3353"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3353/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3353/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3353"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:26Z</updated>
  <id>http://cogprints.org/id/eprint/3353</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3353"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3353</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3353">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Visual binding, reentry, and neuronal synchrony in
a physically situated brain-based device</title>
  <summary type="xhtml">By constructing and analyzing a physically
situated brain-based device (i.e. a device
with sensors and actuators whose behavior
is guided by a simulated nervous system),
we show that reentrant connectivity and dynamic
synchronization can provide an effective
mechanism for binding the visual features
of objects.</summary>
  <author>
    <name>Anil K. Seth</name>
    <email/>
  </author>
  <author>
    <name>Jeffrey L. McKinstry</name>
    <email/>
  </author>
  <author>
    <name>Gerald M. Edelman</name>
    <email/>
  </author>
  <author>
    <name>Jeffrey L. Krichmar</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3337/Atom/cogprints-eprint-3337.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3337"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3337/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3337/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3337"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3337</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3337"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3337</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3337">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Visual Perception of Humanoid Movement</title>
  <summary type="xhtml">We examined similarity judgements of arm movements generated by different control strategies, with the goal of producing natural-looking movements on humanoid robots and virtual humans. We examined a variety of movements generated from human motion capture data as well as fourteen different synthetic motion generation algorithms that were developed based on human motor production theories and computational considerations. In experiments we displayed motion clips generated by these 15 different methods on both a humanoid robot and a computer graphics character and obtained judgements of similarity between pairs of movements. Experimental results reveal that for movements with obviously different paths, as occurred with two production techniques, hand paths dominated the perception of similarity, as expected. However, for roughly similar paths, as occurred for the other techniques, judgements about fast movements appeared to be based on their velocity profiles, while judgements about slow movements were based on a more detailed representation of the movement.</summary>
  <author>
    <name>Frank E. Pollick</name>
    <email/>
  </author>
  <author>
    <name>Joshua G. Hale</name>
    <email/>
  </author>
  <author>
    <name>Phil McAleer</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3365/Atom/cogprints-eprint-3365.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3365"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3365/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3365/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3365"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:27Z</updated>
  <id>http://cogprints.org/id/eprint/3365</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3365"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3365</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3365">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">What should a robot learn from an infant? Mechanisms of action interpretation and observational learning in infancy</title>
  <summary type="xhtml">The paper provides a summary of our recent research on preverbal infants (using violation-of-expectation and observational learning paradigms) demonstrating that one-year-olds interpret and draw systematic inferences about others’ goal-directed actions, and can rely on such inferences when imitating others’ actions or emulating their goals. To account for these findings, it is proposed that one-year-olds apply a non-mentalistic action interpretational system, the ’teleological stance’, which represents actions by relating the relevant aspects of reality (action, goal-state, and situational constraints) through the principle of rational action, which assumes that actions function to realize goal-states by the most efficient means available in the actor’s situation. The relevance of these research findings and of the proposed theoretical model to realizing the epigenetic-robotics goal of building a ’socially relevant’ humanoid robot is discussed.</summary>
  <author>
    <name>György Gergely</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3329/Atom/cogprints-eprint-3329.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3329"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3329/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3329/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3329"/>
  <published>2004-02-12Z</published>
  <updated>2011-03-11T08:55:25Z</updated>
  <id>http://cogprints.org/id/eprint/3329</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3329"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3329</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3329">
    <sword:depositedOn>2004-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The Whole World in Your Hand: Active and Interactive Segmentation</title>
  <summary type="xhtml">Object segmentation is a fundamental problem in computer vision and a powerful resource for development. This paper presents three embodied approaches to the visual segmentation of objects. Each approach to segmentation is aided by the presence of a hand or arm in the proximity of the object to be segmented. The first approach is suitable for a robotic system, where the robot can use its arm to evoke object motion. The second method operates on a wearable system, viewing the world from a human's perspective, with instrumentation to help detect and segment objects that are held in the wearer's hand. The third method operates when observing a human teacher, locating periodic motion (finger/arm/object waving or tapping) and using it as a seed for segmentation. We show that object segmentation can serve as a key resource for development by demonstrating methods that exploit high-quality object segmentations to develop both low-level vision capabilities (specialized feature detectors) and high-level vision capabilities (object recognition and localization).</summary>
  <author>
    <name>Artur Arsenio</name>
    <email/>
  </author>
  <author>
    <name>Paul Fitzpatrick</name>
    <email/>
  </author>
  <author>
    <name>Charles C. Kemp</name>
    <email/>
  </author>
  <author>
    <name>Giorgio Metta</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2663/Atom/cogprints-eprint-2663.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2663"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2663/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2663/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2663"/>
  <published>2002-12-17Z</published>
  <updated>2011-03-11T08:55:07Z</updated>
  <id>http://cogprints.org/id/eprint/2663</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2663"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2663</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2663">
    <sword:depositedOn>2002-12-17Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Adaptivity through alternate freeing and freezing of degrees of freedom</title>
  <summary type="xhtml">Starting with fewer degrees of freedom has been shown to enable a more efficient exploration of the sensorimotor space. While not necessarily leading to optimal task performance, it results in a smaller number of directions of stability, which guide the coordination of additional degrees of freedom. The developmental release of additional degrees of freedom is then expected to allow for optimal task performance and more tolerance and adaptation to environmental interaction. In this paper, we test this assumption with a small-sized humanoid robot that learns to swing under environmental perturbations. Our experiments show that a progressive release of degrees of freedom alone is not sufficient to cope with environmental perturbations. Instead, alternate freezing and freeing of the degrees of freedom is required. This finding is consistent with observations made during transitional periods in the acquisition of skills in infants.</summary>
  <author>
    <name>Max Lungarella</name>
    <email/>
  </author>
  <author>
    <name>Dr Luc Berthouze</name>
    <email>1463</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2516/Atom/cogprints-eprint-2516.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2516"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2516/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2516/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2516"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:03Z</updated>
  <id>http://cogprints.org/id/eprint/2516</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2516"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2516</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2516">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Adaptivity through Physical Immaturity</title>
  <summary type="xhtml">Given a neural control structure, what would be the impact of body growth on control performance? This question, which addresses the issue of the interaction between innate structure, ongoing developing structure and experience, is very relevant to the field of epigenetic robotics. Much of the early social interaction is done as the body develops and the interplay cannot be ignored. We hypothesize that starting with fewer degrees of freedom enables a more efficient exploration of the sensorimotor space, that results in multiple directions of stability. While not necessarily corresponding to optimal task performance, they will guide the coordination of additional degrees of freedom. These additional degrees of freedom then allow for optimal task performance as well as for more tolerance and adaptation to environmental interaction. We propose a simple case-study to validate our hypothesis and describe experiments with a small humanoid robot.</summary>
  <author>
    <name>Max Lungarella</name>
    <email/>
  </author>
  <author>
    <name>Luc Berthouze</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2528/Atom/cogprints-eprint-2528.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2528"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2528/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2528/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2528"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:04Z</updated>
  <id>http://cogprints.org/id/eprint/2528</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2528"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2528</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2528">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Behavior-Based Early Language Development on a Humanoid Robot</title>
  <summary type="xhtml">We are exploring the idea that early language acquisition could be better modelled on an artificial creature by considering the pragmatic aspect of natural language and of its development in human infants. We have implemented a system of vocal behaviors on Kismet in which "words" or concepts are behaviors in a competitive hierarchy. This paper reports on the framework, the vocal system's architecture and algorithms, and some preliminary results from vocal label learning and concept formation.</summary>
  <author>
    <name>Paulina Varshavskaya</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2520/Atom/cogprints-eprint-2520.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2520"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2520/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2520/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2520"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:04Z</updated>
  <id>http://cogprints.org/id/eprint/2520</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2520"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2520</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2520">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Better Vision Through Manipulation</title>
  <summary type="xhtml">For the purposes of manipulation, we would like to know what parts of the environment are physically coherent ensembles - that is, which parts will move together, and which are more or less independent. It takes a great deal of experience before this judgement can be made from purely visual information. This paper develops active strategies for acquiring that experience through experimental manipulation, using tight correlations between arm motion and optic flow to detect both the arm itself and the boundaries of objects with which it comes into contact. We argue that following causal chains of events out from the robot's body into the environment allows for a very natural developmental progression of visual competence, and relate this idea to results in neuroscience.</summary>
  <author>
    <name>Giorgio Metta</name>
    <email/>
  </author>
  <author>
    <name>Paul Fitzpatrick</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3253/Atom/cogprints-eprint-3253.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3253"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3253/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3253/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3253"/>
  <published>2003-10-29Z</published>
  <updated>2011-03-11T08:55:23Z</updated>
  <id>http://cogprints.org/id/eprint/3253</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3253"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3253</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3253">
    <sword:depositedOn>2003-10-29Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Body Scheme Acquisition by Cross Modal Map Learning among Tactile, Visual, and Proprioceptive Spaces</title>
  <summary type="xhtml">How to represent one's own body is one of the most interesting issues in cognitive developmental robotics, which aims to understand the cognitive developmental processes that an intelligent robot would require and how to realize them in a physical entity.  This paper presents a cognitive model of how a robot acquires its own body representation, that is, a body scheme for the body surface.  The internal-observer assumption makes it difficult for a robot to associate sensory information from different modalities because of the lack of references between them, which are usually given by the designer in the prenatal stage of the robot.  Our model is based on cross-modal map learning among joint, vision, and tactile sensor spaces, associating different pairs of sensor values when they are activated simultaneously.  We show a preliminary experiment, and then discuss how our model can explain reported phenomena concerning the body scheme, as well as future issues.</summary>
  <author>
    <name>Yuichiro Yoshikawa</name>
    <email/>
  </author>
  <author>
    <name>Hiroyoshi Kawanishi</name>
    <email/>
  </author>
  <author>
    <name>Minoru Asada</name>
    <email/>
  </author>
  <author>
    <name>Koh Hosoda</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2619/Atom/cogprints-eprint-2619.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2619"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2619/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2619/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2619"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:06Z</updated>
  <id>http://cogprints.org/id/eprint/2619</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2619"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2619</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2619">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Can a Robot Hear Music?  Can a Robot Dance?  Can a Robot Tell What it Knows or Intends to Do?  Can it Feel Pride or Shame in Company?  -- Questions of the Nature of Human Vitality</title>
  <summary type="xhtml">None</summary>
  <author>
    <name>Colwyn Trevarthen</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2530/Atom/cogprints-eprint-2530.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2530"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2530/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2530/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2530"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:04Z</updated>
  <id>http://cogprints.org/id/eprint/2530</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2530"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2530</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2530">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Developmental Robots - A New Paradigm</title>
  <summary type="xhtml">It has proved extremely challenging to program a robot to a sufficient degree that it acts properly in a typical unknown human environment. This is especially true for a humanoid robot, due to the very large number of redundant degrees of freedom and the large number of sensors required for a humanoid to work safely and effectively in the human environment. How can we address this fundamental problem? Motivated by human mental development from infancy to adulthood, we present a theory, an architecture, and some experimental results showing how to enable a robot to develop its mind automatically, through online, real-time interactions with its environment. Humans mentally “raise” the robot through “robot sitting” and “robot schools” instead of task-specific robot programming.</summary>
  <author>
    <name>Juyang Weng</name>
    <email/>
  </author>
  <author>
    <name>Yilu Zhang</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2618/Atom/cogprints-eprint-2618.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2618"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2618/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2618/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2618"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:06Z</updated>
  <id>http://cogprints.org/id/eprint/2618</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2618"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2618</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2618">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Emergence of imitation mediated by objects</title>
  <summary type="xhtml">This paper describes our model of the emergence of a mirror system for imitative learning. The mirror system is an intermodal mapping between someone's action (as seen) and one's own action (to execute). It is widely assumed that the mapping is a geometric transformation between bodies in the visual and motor spaces; however, it is still unclear if the mapping is innate or, if not, how it is acquired. We claim here that the mapping is not a geometric transformation between bodies but a functional correspondence between someone's action on an object for producing an effect and one's own action on the same or similar object for producing a similar effect, which is learnable because both tend to utilize the object's affordance in a similar way.</summary>
  <author>
    <name>Hideki Kozima</name>
    <email/>
  </author>
  <author>
    <name>Cocoro Nakagawa</name>
    <email/>
  </author>
  <author>
    <name>Hiroyuki Yano</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2500/Atom/cogprints-eprint-2500.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2500"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2500/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2500/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2500"/>
  <published>2003-09-26Z</published>
  <updated>2011-03-11T08:55:03Z</updated>
  <id>http://cogprints.org/id/eprint/2500</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2500"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2500</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2500">
    <sword:depositedOn>2003-09-26Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">From Visuo-Motor Development to Low-level Imitation</title>
  <summary type="xhtml">We present the first stages of the developmental course of a robot using vision and a 5-degree-of-freedom robotic arm. During an exploratory behavior, the robot learns visuo-motor control of its mechanical arm. We show how a simple neural network architecture, combining elementary vision, a self-organized algorithm, and dynamical Neural Fields, is able to learn and use proper associations between vision and arm movements, even though the problem is ill-posed (a 2-D to 3-D mapping, as well as mechanical redundancy between different joints). Highlighting the generic aspect of such an architecture, we show as a robotic result that it serves as a basis for simple gestural imitation of humans. Finally, we show how the imitative mechanism carries the developmental course forward, allowing the acquisition of more and more complex behavioral capabilities.</summary>
  <author>
    <name>Pierre Andry</name>
    <email/>
  </author>
  <author>
    <name>Philippe Gaussier</name>
    <email/>
  </author>
  <author>
    <name>Jacqueline Nadel</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2519/Atom/cogprints-eprint-2519.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2519"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2519/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2519/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2519"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:03Z</updated>
  <id>http://cogprints.org/id/eprint/2519</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2519"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2519</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2519">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Generation of Whole-Body Expressive Movement Based on Somatical Theories</title>
  <summary type="xhtml">An automatic choreography method to generate lifelike body movements is proposed. The method is based on somatics theories that are conventionally used to evaluate humans’ psychological and developmental states by analyzing body movement. The idea of this paper is to use the theories in the inverse way: to facilitate the generation of artificial body movements that are plausible with regard to the evolutionary, developmental, and emotional states of robots or other non-living movers. This paper reviews somatic theories and describes a strategy for implementing automatic body-movement generation. In addition, a psychological experiment is reported that verifies the expressive ability of body-movement rhythm. The method facilitates choreographing the body movements of humanoids, animal-shaped robots, and computer-graphics characters in video games.</summary>
  <author>
    <name>Toru Nakata</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2534/Atom/cogprints-eprint-2534.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2534"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2534/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2534/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2534"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:04Z</updated>
  <id>http://cogprints.org/id/eprint/2534</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2534"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2534</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2534">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Global Dynamics: a new concept for design of dynamical behavior</title>
  <summary type="xhtml">Global dynamics, a novel concept for the design of human/humanoid behavior, is proposed. The principle of this concept is to exploit the body dynamics and apply control input only where it is necessary.
Within the phase space of the body dynamics, many stable and unstable manifolds coexist. If we analyse this structure and obtain a map of sufficient resolution, it may be possible to realise a motion by exploiting stable regions to reduce control input and unstable regions to switch between stable regions.
We also expect an emergence of symbols within the dynamics, as the series of points where control input should be applied. This feature realises a higher-level description and makes adaptive behavior easier. We are studying this from two aspects: a motion-capture experiment and a dynamical simulation of a simple elastic robot. The former supports the above assumption, and the latter shows that exploiting dynamical stability is useful.</summary>
  <author>
    <name>Tomoyuki Yamamoto</name>
    <email/>
  </author>
  <author>
    <name>Yasuo Kuniyoshi</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2506/Atom/cogprints-eprint-2506.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2506"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2506/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2506/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2506"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:03Z</updated>
  <id>http://cogprints.org/id/eprint/2506</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2506"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2506</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2506">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Humanoid Motion Description Language</title>
  <summary type="xhtml">In this paper we propose a description language for specifying motions for humanoid robots and for allowing humanoid robots to acquire motor skills. Locomotion greatly increases our ability to interact with our environments, which in turn increases our mental abilities. This principle also applies to humanoid robots. However, it is very difficult to specify humanoid motions and to represent motor skills, which in most cases require four-dimensional space representations. We propose a representation framework that includes the following attributes: motion description layers, an egocentric reference system, progressive quantized refinement, and automatic constraint satisfaction. We also outline strategies for acquiring new motor skills through trial-and-error learning, a macro approach, and programming. Finally, we outline the development of a new humanoid motion description language called Cybele.</summary>
  <author>
    <name>Ben Choi</name>
    <email/>
  </author>
  <author>
    <name>Yanbing Chen</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2331/Atom/cogprints-eprint-2331.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2331"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2331/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2331/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2331"/>
  <published>2002-07-18Z</published>
  <updated>2011-03-11T08:54:57Z</updated>
  <id>http://cogprints.org/id/eprint/2331</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2331"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2331</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2331">
    <sword:depositedOn>2002-07-18Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">An improved 2D optical flow sensor for motion segmentation</title>
  <summary type="xhtml">A functional focal-plane implementation of a 2D optical flow system is presented that detects and preserves motion discontinuities. The system is composed of two different network layers of analog computational units arranged in retinotopic order. The units in the first layer (the optical flow network) estimate the local optical flow field in two visual dimensions, where the strength of their nearest-neighbor connections determines the amount of motion integration. Whereas in an earlier implementation (Stocker and Douglas, 1999) the connection strength was set constant over the complete image space, it is now dynamically and locally controlled by the second network layer (the motion discontinuities network), which is recurrently connected to the optical flow network. The connection strengths in the optical flow network are modulated such that visual motion integration is ideally facilitated only within image areas that are likely to represent common motion sources. Results from an experimental aVLSI chip illustrate the potential of the approach and its functionality under real-world conditions.</summary>
  <author>
    <name>Alan Stocker</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2524/Atom/cogprints-eprint-2524.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2524"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2524/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2524/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2524"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:04Z</updated>
  <id>http://cogprints.org/id/eprint/2524</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2524"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2524</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2524">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Looking for a suitable strategy for each problem - Multiple tasks approach to navigation learning task</title>
  <summary type="xhtml">We propose the functional parts combination (FPC) model, whereby a problem-solving strategy is acquired depending on the tasks given. The model is based on the neuroscientific finding that each cerebral cortical area has a different role and is selectively activated depending on the task. The FPC model is a meta-learning model that consists of a set of functional parts and a sequence of control signals that specifies their combination. The functional parts are combined depending on the situation, to realize the processing circuit required for that situation. We use a genetic algorithm to search for the control signals. We examine the model by evaluating the difference in acquired behavior of (1) two agents with different functional parts working on the same navigational task and (2) two agents with the same functional parts working on different tasks. We show that an agent using the FPC model acquires learning strategies suitable for the given problems.</summary>
  <author>
    <name>Akitoshi Ogawa</name>
    <email/>
  </author>
  <author>
    <name>Takashi Omori</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2507/Atom/cogprints-eprint-2507.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2507"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2507/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2507/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2507"/>
  <published>2003-10-29Z</published>
  <updated>2011-03-11T08:55:03Z</updated>
  <id>http://cogprints.org/id/eprint/2507</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2507"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2507</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2507">
    <sword:depositedOn>2003-10-29Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Looking like a human: How conversation analytic work on gaze direction in human interaction can be relevant for design and analysis of robotic interaction.</title>
  <summary type="xhtml">A crucial aspect of the development of language in children has been concerned with pragmatics - a field which explores the ways in which interaction is successfully accomplished. One aspect of this is concerned with the sequential implicativeness of our actions - that is, what particular behaviours accomplish given the specific turn-by-turn interactive sequence in which they occur. This paper seeks to consider some aspects of this by reference to conversation analytic work on gaze in adult interaction. In this way the paper attempts to provide a brief overview of some of the ways in which our thinking about robot-human interaction can be deepened by an appreciation of conversation analytic work. In particular it argues that the empirical basis of conversation analysis (henceforth CA) offers a wonderful treasure-trove of understandings about how humans accomplish social interaction. The understanding that CA provides is derived from careful empirical scrutiny, and it is therefore able to offer a perspective on interaction that is sensitive to minute detail rather than crude applications of global concepts. Thus this paper provides a provisional inspection of a small fraction of the CA literature concerning the use of gaze in interaction and thinks through the relevance that this might have for the design and understanding of interacting robots. Whilst CA provides a complex understanding of human interaction, predominantly derived from the everyday talk of adults, this paper argues that the approach can provide both an idealised target of communicative competence and, perhaps more importantly, a means of understanding instances of human-robot interaction. In this way CA may usefully supplement other approaches to communicative competence in work on interacting robots.</summary>
  <author>
    <name>Paul Dickerson</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2511/Atom/cogprints-eprint-2511.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2511"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2511/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2511/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2511"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:03Z</updated>
  <id>http://cogprints.org/id/eprint/2511</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2511"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2511</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2511">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Novelty and Reinforcement Learning in the Value System of Developmental Robots</title>
  <summary type="xhtml">The value system of a developmental robot signals the occurrence of salient sensory inputs, modulates the mapping from sensory inputs to action outputs, and evaluates candidate actions. In the work reported here, a low-level value system is modeled and implemented. It simulates the non-associative animal learning mechanism known as the habituation effect. Reinforcement learning is also integrated with novelty. Experimental results show that the proposed value system works as designed in a study of robot viewing-angle selection.</summary>
  <author>
    <name>Xiao Huang</name>
    <email/>
  </author>
  <author>
    <name>John Weng</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3055/Atom/cogprints-eprint-3055.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3055"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3055/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3055/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3055"/>
  <published>2003-07-16Z</published>
  <updated>2011-03-11T08:55:18Z</updated>
  <id>http://cogprints.org/id/eprint/3055</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3055"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3055</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3055">
    <sword:depositedOn>2003-07-16Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The physical symbol grounding problem</title>
  <summary type="xhtml">This paper presents an approach to solve the symbol grounding problem within the framework of embodied cognitive science. It will be argued that symbolic structures can be used within the paradigm of embodied cognitive science by adopting an alternative definition of a symbol. In this alternative definition, the symbol may be viewed as a structural coupling between an agent's sensorimotor activations and its environment. A robotic experiment is presented in which mobile robots develop a symbolic structure from scratch by engaging in a series of language games. In this experiment it is shown that robots can develop a symbolic structure with which they can communicate the names of a few objects with a remarkable degree of success. It is further shown that, although the referents may be interpreted differently on different occasions, the objects are usually named with only one form.</summary>
  <author>
    <name>Paul Vogt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2521/Atom/cogprints-eprint-2521.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2521"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2521/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2521/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2521"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:04Z</updated>
  <id>http://cogprints.org/id/eprint/2521</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2521"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2521</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2521">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Physically Embedded Genetic Algorithm Learning in Multi-Robot Scenarios: The PEGA algorithm</title>
  <summary type="xhtml">We present experiments in which a group of autonomous mobile robots learn to perform fundamental sensor-motor tasks through a collaborative learning process. Behavioural strategies, i.e. motor responses to sensory stimuli, are encoded by means of genetic strings stored on the individual robots, and adapted through a genetic algorithm (Mitchell, 1998) executed by the entire robot collective: robots communicate their own strings and corresponding fitness to each other, and then execute a genetic algorithm to improve their individual behavioural strategy.
The robots acquired three different sensorimotor competences, as well as the ability to select one of two, or one of three, behaviours depending on context ("behaviour management"). Results show that fitness indeed increases with increasing learning time, and analysis of the acquired behavioural strategies demonstrates that they are effective in accomplishing the desired task.</summary>
  <author>
    <name>Ulrich Nehmzow</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2393/Atom/cogprints-eprint-2393.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2393"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2393/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2393/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2393"/>
  <published>2002-08-09Z</published>
  <updated>2011-03-11T08:54:58Z</updated>
  <id>http://cogprints.org/id/eprint/2393</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2393"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2393</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2393">
    <sword:depositedOn>2002-08-09Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Probabilistic Search for Object Segmentation and Recognition</title>
  <summary type="xhtml">The problem of searching for a model-based scene interpretation is analyzed within a probabilistic framework. Object models are formulated as generative models for range data of the scene. A new statistical criterion, the truncated object probability, is introduced to infer an optimal sequence of object hypotheses to be evaluated for their match to the data. The truncated probability is partly determined by prior knowledge of the objects and partly learned from data. Some experiments on sequence quality and object segmentation and recognition from stereo data are presented. The article recovers classic concepts from object recognition (grouping, geometric hashing, alignment) from the probabilistic perspective and adds insight into the optimal ordering of object hypotheses for evaluation. Moreover, it introduces point-relation densities, a key component of the truncated probability, as statistical models of local surface shape.</summary>
  <author>
    <name>Dr. Ulrich Hillenbrand</name>
    <email/>
  </author>
  <author>
    <name>Prof. Dr. Gerd Hirzinger</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2517/Atom/cogprints-eprint-2517.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2517"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2517/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2517/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2517"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:03Z</updated>
  <id>http://cogprints.org/id/eprint/2517</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2517"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2517</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2517">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Social Situatedness:  Vygotsky and Beyond</title>
  <summary type="xhtml">The concept of ‘social situatedness’, i.e. the idea that the development of individual intelligence requires a social (and cultural) embedding, has recently received much attention in cognitive science and artificial intelligence research. The work of Lev Vygotsky, who put forward this view as early as the 1920s, has influenced the discussion to some degree, but still remains far from well known. This paper therefore aims to give an overview of his cognitive development theory and discuss its relation to more recent work in primatology and socially situated artificial intelligence, in particular humanoid robotics.</summary>
  <author>
    <name>Jessica Lindblom</name>
    <email/>
  </author>
  <author>
    <name>Tom Ziemke</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2596/Atom/cogprints-eprint-2596.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2596"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2596/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2596/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2596"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:06Z</updated>
  <id>http://cogprints.org/id/eprint/2596</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2596"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2596</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2596">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Structures, inner values, hierarchies and stages: essentials for developmental robot architectures</title>
  <summary type="xhtml">In this paper we try to locate the essential components needed for a developmental robot architecture. We take the vocabulary and the main concepts from Piaget’s genetic epistemology and Vygotsky’s activity theory. After proposing an outline for a general developmental architecture, we describe the architectures that we have been developing in recent years - Petitagé and Vygovorotsky. According to this outline, various contemporary works in autonomous agents can be classified, in an attempt to get a glimpse of the big picture and make the advances and open problems visible.</summary>
  <author>
    <name>Andrea Kulakov</name>
    <email/>
  </author>
  <author>
    <name>Georgi Stojanov</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2518/Atom/cogprints-eprint-2518.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2518"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2518/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2518/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2518"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:03Z</updated>
  <id>http://cogprints.org/id/eprint/2518</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2518"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2518</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2518">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Towards a Mirror System for the Development of Socially-Mediated Skills</title>
  <summary type="xhtml">We present a system that attempts to model the functional role of mirror neurons, namely the activation of structures in response to both the observation of a demonstrated task, and its generation. Through social situatedness and a set of innate skills, perceptual and motor structures develop for recognition and reproduction of demonstrated actions. We believe this is an implementation towards a mirror system, and we test it on two platforms, one in simulation involving imitation of object interactions, the second on a physical robot learning from a human to follow walls.</summary>
  <author>
    <name>Yuval Marom</name>
    <email/>
  </author>
  <author>
    <name>George Maistros</name>
    <email/>
  </author>
  <author>
    <name>Gillian Hayes</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2222/Atom/cogprints-eprint-2222.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2222"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2222/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2222/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2222"/>
  <published>2002-05-23Z</published>
  <updated>2011-03-11T08:54:55Z</updated>
  <id>http://cogprints.org/id/eprint/2222</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2222"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2222</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2222">
    <sword:depositedOn>2002-05-23Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Towards a Theory Grounded Theory of Language</title>
  <summary type="xhtml">In this paper, we build upon the idea of theory grounding and propose one specific form of theory grounding, a theory of language. Theory grounding is the idea that we can imbue our embodied artificially intelligent systems with theories by modeling the way humans, and specifically young children, develop skills with theories. Modeling theory development promises to increase the conceptual and behavioral flexibility of these systems. An example of theory development in children is the social understanding referred to as theory of mind. Language is a natural task for theory grounding because it is vital in symbolic skills and apparently necessary in developing theories. Word learning, and specifically developing a concept of words, is proposed as the first step in a theory grounded theory of language.</summary>
  <author>
    <name>Christopher G. Prince</name>
    <email/>
  </author>
  <author>
    <name>Eric J. Mislivec</name>
    <email/>
  </author>
  <author>
    <name>Oleksandr V. Kosolapov</name>
    <email/>
  </author>
  <author>
    <name>Troy R. Lykken</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2529/Atom/cogprints-eprint-2529.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2529"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2529/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2529/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2529"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:04Z</updated>
  <id>http://cogprints.org/id/eprint/2529</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2529"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2529</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2529">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Unsupervised navigation using an economy principle</title>
  <summary type="xhtml">We describe robot navigation learning based on self-selection of privileged vectors through the environment in accordance with an in-built economy metric. This provides the opportunity both for progressive behavioural adaptation and for adaptive derivations, leading, through situated activity, to "representations" of the environment which are both economically attained and inherently meaningful to the agent.</summary>
  <author>
    <name>Lawrence Warnett</name>
    <email/>
  </author>
  <author>
    <name>Brendan McGonigle</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2505/Atom/cogprints-eprint-2505.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2505"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2505/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2505/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2505"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:03Z</updated>
  <id>http://cogprints.org/id/eprint/2505</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2505"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2505</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2505">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Volitron: On a Psychodynamic Robot and Its Four Realities</title>
  <summary type="xhtml">This paper discusses the concept of Volitron - a controller designed to make its host robot increase its competence in such activities as self-initiated exploration of an environment, new goal acquisition, and planning/executing actions while taking into account predicted behaviors of objects of interest. There are four key elements in Volitron's structure: a model of perceived reality, a model of desired reality, a model of ideal reality, and a model of anticipated reality. The tasks of the robot's working memory include producing images of the robot itself imitating another subject's activities and sending the images to the model of desired reality. A tension (a concept borrowed from psychoanalysis) arising from the differences between a perceived reality and a desired reality is a source of motivation toward action. The final decision to take an action is based on a comparison of the model of anticipated reality with that of ideal reality. The interactions of Volitron's elements are described in the paper. Furthermore, a computational model of working memory (WM) and its psychological justification are provided.</summary>
  <author>
    <name>Andrzej Buller</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2533/Atom/cogprints-eprint-2533.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2533"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2533/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2533/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2533"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:04Z</updated>
  <id>http://cogprints.org/id/eprint/2533</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2533"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2533</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2533">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Walking Humanoids for Robotics Research</title>
  <summary type="xhtml">We present three humanoid robots intended as platforms for research in robotics and in cognitive development in robotic systems. The 'priscilla' robot is a 180 cm full-scale humanoid; the mid-size prototype, called 'elvis', is about 70 cm tall. The smallest humanoid is the 'elvina' type, about 28 cm tall. Two instances of 'elvina' have been built to enable experiments with cooperating humanoids. The underlying ideas and conceptual principles, such as anthropomorphism, embodiment, and mechanisms for learning and adaptivity, are introduced as well.</summary>
  <author>
    <name>Krister Wolff</name>
    <email/>
  </author>
  <author>
    <name>Peter Nordin</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2508/Atom/cogprints-eprint-2508.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2508"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2508/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2508/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2508"/>
  <published>2003-10-04Z</published>
  <updated>2011-03-11T08:55:03Z</updated>
  <id>http://cogprints.org/id/eprint/2508</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2508"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2508</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2508">
    <sword:depositedOn>2003-10-04Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Why it is important to build robots capable of doing science</title>
  <summary type="xhtml">Science, like any other cognitive activity, is grounded in the sensorimotor interaction of our bodies with the environment. Human embodiment thus constrains the class of scientific concepts and theories which are accessible to us. The paper explores the possibility of doing science with artificial cognitive agents, in the framework of an interactivist-constructivist cognitive model of science. Intelligent robots, by virtue of having different sensorimotor capabilities, may overcome the fundamental limitations of human science and provide important technological innovations. Mathematics and nanophysics are prime candidates for being studied by artificial scientists.</summary>
  <author>
    <name>Razvan V. Florian</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3739/Atom/cogprints-eprint-3739.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3739"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3739/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3739/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3739"/>
  <published>2004-08-06Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3739</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3739"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3739</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3739">
    <sword:depositedOn>2004-08-06Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The Design and Implementation of a Bayesian CAD Modeler for Robotic Applications</title>
  <summary type="xhtml">We present a Bayesian CAD modeler for robotic applications. We address the problem of taking into account the propagation of geometric uncertainties when solving inverse geometric problems. The proposed method may be seen as a generalization of constraint-based approaches in which we explicitly model geometric uncertainties. Using our methodology, a geometric constraint is expressed as a probability distribution on the system parameters and the sensor measurements, instead of a simple equality or inequality. To solve geometric problems in this framework, we propose an original resolution method able to adapt to problem complexity.
Using two examples, we show how to apply our approach by providing simulation results using our modeler.</summary>
  <author>
    <name>Dr K Mekhnacha</name>
    <email/>
  </author>
  <author>
    <name>Dr E Mazer</name>
    <email/>
  </author>
  <author>
    <name>Dr P Bessiere</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/6279/Atom/cogprints-eprint-6279.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/6279"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/6279/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/6279/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/6279"/>
  <published>2008-11-23T09:12:25Z</published>
  <updated>2011-03-11T08:57:16Z</updated>
  <id>http://cogprints.org/id/eprint/6279</id>
  <category term="other" label="Other" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/6279"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/6279</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/6279">
    <sword:depositedOn>2008-11-23T09:12:25Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Introduction to a systemic theory of meaning</title>
  <summary type="xhtml">Information and meanings are present everywhere around us and within ourselves. &#13;
Specific fields of study have been developed to link information and meaning: &#13;
- Semiotics/Biosemiotics &#13;
- Phenomenology &#13;
- Analytic Philosophy, Linguistics&#13;
- Psychology &#13;
Yet no general coverage of the notion of meaning is available.&#13;
We propose to address this gap with a systemic approach to meaning generation.</summary>
  <author>
    <name>Mr Christophe Menant</name>
    <email>christophe.menant@hotmail.fr</email>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2615/Atom/cogprints-eprint-2615.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2615"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2615/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2615/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2615"/>
  <published>2002-11-21Z</published>
  <updated>2011-03-11T08:55:06Z</updated>
  <id>http://cogprints.org/id/eprint/2615</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2615"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2615</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2615">
    <sword:depositedOn>2002-11-21Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Minds, Machines and Turing: The Indistinguishability of Indistinguishables</title>
  <summary type="xhtml">Turing's celebrated 1950 paper proposes a very general methodological criterion for modelling
     mental function: total functional equivalence and indistinguishability. His criterion gives rise to a hierarchy of
     Turing Tests, from subtotal ("toy") fragments of our functions (t1), to total symbolic (pen-pal) function (T2 --
     the standard Turing Test), to total external sensorimotor (robotic) function (T3), to total internal microfunction
     (T4), to total indistinguishability in every empirically discernible respect (T5). This is a "reverse-engineering"
     hierarchy of (decreasing) empirical underdetermination of the theory by the data. Level t1 is clearly too
     underdetermined, T2 is vulnerable to a counterexample (Searle's Chinese Room Argument), and T4 and T5 are
     arbitrarily overdetermined. Hence T3 is the appropriate target level for cognitive science. When it is reached,
     however, there will still remain more unanswerable questions than when Physics reaches its Grand Unified
     Theory of Everything (GUTE), because of the mind/body problem and the other-minds problem, both of which
     are inherent in this empirical domain, even though Turing hardly mentions them. 
</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1635/Atom/cogprints-eprint-1635.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1635"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1635/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1635/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1635"/>
  <published>2001-08-12Z</published>
  <updated>2011-03-11T08:54:43Z</updated>
  <id>http://cogprints.org/id/eprint/1635</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1635"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1635</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1635">
    <sword:depositedOn>2001-08-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Theory Grounding in Embodied Artificially Intelligent Systems</title>
  <summary type="xhtml">Theory grounding is suggested as a way to address the unresolved cognitive science issues of systematicity and productivity. Theory grounding involves grounding the theory skills and knowledge of an embodied artificially intelligent (AI) system by developing theory skills and knowledge from the bottom up. It is proposed that theory grounded AI systems should be patterned after the psychological developmental stages that infants and young children go through in acquiring naïve theories. Systematicity and productivity are properties of certain representational systems indicating the range of representations the systems can form. Systematicity and productivity are likely outcomes of theory grounded AI systems because systematicity and productivity are theoretical concepts. Theory grounded systems should be well oriented to acquire and develop these theoretical concepts.</summary>
  <author>
    <name>Christopher Prince</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1622/Atom/cogprints-eprint-1622.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1622"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1622/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1622/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1622"/>
  <published>2001-06-19Z</published>
  <updated>2011-03-11T08:54:42Z</updated>
  <id>http://cogprints.org/id/eprint/1622</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1622"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1622</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1622">
    <sword:depositedOn>2001-06-19Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">What's Wrong and Right About Searle's Chinese Room Argument?</title>
  <summary type="xhtml">Searle's Chinese Room Argument showed a fatal flaw in computationalism
                               (the idea that mental states are just computational states) and helped usher in
                               the era of situated robotics and symbol grounding (although Searle himself
                               thought neuroscience was the only correct way to understand the mind).</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/4023/Atom/cogprints-eprint-4023.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/4023"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/4023/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/4023/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/4023"/>
  <published>2005-01-06Z</published>
  <updated>2011-03-11T08:55:49Z</updated>
  <id>http://cogprints.org/id/eprint/4023</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/4023"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/4023</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/4023">
    <sword:depositedOn>2005-01-06Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">What's Wrong and Right About Searle's Chinese Room Argument?</title>
  <summary type="xhtml">Searle's Chinese Room Argument showed a fatal flaw in computationalism
                               (the idea that mental states are just computational states) and helped usher in
                               the era of situated robotics and symbol grounding (although Searle himself
                               thought neuroscience was the only correct way to understand the mind).</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1670/Atom/cogprints-eprint-1670.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1670"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1670/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1670/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1670"/>
  <published>2001-07-05Z</published>
  <updated>2011-03-11T08:54:44Z</updated>
  <id>http://cogprints.org/id/eprint/1670</id>
  <category term="techreport" label="Departmental Technical Report" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1670"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1670</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1670">
    <sword:depositedOn>2001-07-05Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Bayesian Robot Programming</title>
  <summary type="xhtml">We propose a new method to program robots based on Bayesian inference and learning. The capacities of this programming method are demonstrated through a succession of increasingly complex experiments. Starting from the learning of simple reactive behaviors, we present instances of behavior combination, sensor fusion, hierarchical behavior composition, situation recognition and temporal sequencing. This series of experiments comprises the steps in the incremental development of a complex robot program. The advantages and drawbacks of this approach are discussed along with these different experiments and summed up in the conclusion. These different robotics programs may be seen as an illustration of probabilistic programming, applicable whenever one must deal with problems based on uncertain or incomplete knowledge. The scope of possible applications is obviously much broader than robotics.</summary>
  <author>
    <name>Olivier Lebeltel</name>
    <email/>
  </author>
  <author>
    <name>Pierre Bessiere</name>
    <email/>
  </author>
  <author>
    <name>Julien Diard</name>
    <email/>
  </author>
  <author>
    <name>Emmanuel Mazer</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3056/Atom/cogprints-eprint-3056.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3056"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3056/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3056/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3056"/>
  <published>2003-07-16Z</published>
  <updated>2011-03-11T08:55:18Z</updated>
  <id>http://cogprints.org/id/eprint/3056</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3056"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3056</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3056">
    <sword:depositedOn>2003-07-16Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Bootstrapping grounded symbols by minimal autonomous robots</title>
  <summary type="xhtml">In this paper an experiment is presented in which two mobile robots develop a shared lexicon whose meanings are grounded in the real world. The robots start with neither a lexicon nor shared meanings, and play language games in which they generate new meanings and negotiate words for these meanings. The experiment tries to find the minimal conditions under which verbal communication may begin to evolve. The robots are autonomous in terms of computing and cognition, but they are otherwise far simpler than most, if not all, animals. It is demonstrated that a lexicon can nevertheless be made to emerge, even though there are strong limits on the size and stability of this lexicon.</summary>
  <author>
    <name>Paul Vogt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1296/Atom/cogprints-eprint-1296.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1296"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1296/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1296/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1296"/>
  <published>2001-02-12Z</published>
  <updated>2011-03-11T08:54:29Z</updated>
  <id>http://cogprints.org/id/eprint/1296</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1296"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1296</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1296">
    <sword:depositedOn>2001-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Duplication of modules facilitates the evolution of functional specialization</title>
  <summary type="xhtml">The evolution of simulated robots with three different architectures is studied. We compared a non-modular feedforward network, a hardwired modular network, and a duplication-based modular motor control network. We conclude that both modular architectures outperform the non-modular architecture, in terms of both the rate of adaptation and the level of adaptation achieved. The main difference between the hardwired and duplication-based modular architectures is that in the latter the modules reached a much higher degree of functional specialization of their motor control units with regard to high-level behavioral functions. The hardwired architectures reach the same level of performance, but have a more distributed assignment of functional tasks to the motor control units. We conclude that the mechanism through which functional specialization is achieved is similar to the mechanism proposed for the evolution of duplicated genes. It is found that the duplication of multifunctional modules first leads to a change in the regulation of the module, leading to a differentiation of the functional context in which the module is used. Then the module adapts to the new functional context. After this second step the system is locked into a functionally specialized state. We suggest that functional specialization may be an evolutionary absorption state.</summary>
  <author>
    <name>Raffaele Calabretta</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1304/Atom/cogprints-eprint-1304.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1304"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1304/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1304/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1304"/>
  <published>2001-02-12Z</published>
  <updated>2011-03-11T08:54:32Z</updated>
  <id>http://cogprints.org/id/eprint/1304</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1304"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1304</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1304">
    <sword:depositedOn>2001-02-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Duplication of modules facilitates the evolution of functional specialization</title>
  <summary type="xhtml">The evolution of simulated robots with three different architectures is studied. We compared a non-modular feedforward network, a hardwired modular network, and a duplication-based modular motor control network. We conclude that both modular architectures outperform the non-modular architecture, in terms of both the rate of adaptation and the level of adaptation achieved. The main difference between the hardwired and duplication-based modular architectures is that in the latter the modules reached a much higher degree of functional specialization of their motor control units with regard to high-level behavioral functions. The hardwired architectures reach the same level of performance, but have a more distributed assignment of functional tasks to the motor control units. We conclude that the mechanism through which functional specialization is achieved is similar to the mechanism proposed for the evolution of duplicated genes. It is found that the duplication of multifunctional modules first leads to a change in the regulation of the module, leading to a differentiation of the functional context in which the module is used. Then the module adapts to the new functional context. After this second step the system is locked into a functionally specialized state. We suggest that functional specialization may be an evolutionary absorption state.</summary>
  <author>
    <name>Raffaele Calabretta</name>
    <email/>
  </author>
  <author>
    <name>Stefano Nolfi</name>
    <email/>
  </author>
  <author>
    <name>Domenico Parisi</name>
    <email/>
  </author>
  <author>
    <name>Gunter P. Wagner</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3738/Atom/cogprints-eprint-3738.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3738"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3738/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3738/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3738"/>
  <published>2004-08-10Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3738</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3738"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3738</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3738">
    <sword:depositedOn>2004-08-10Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A Robotic CAD System using a Bayesian Framework</title>
  <summary type="xhtml">We present in this paper a Bayesian CAD system
for robotic applications. We address the problem of the
propagation of geometric uncertainties and how to take
this propagation into account when solving inverse
problems. We describe the methodology we use to
represent and handle uncertainties using probability
distributions on the system's parameters and sensor
measurements. It may be seen as a generalization of
constraint-based approaches where we express a
constraint as a probability distribution instead of a
simple equality or inequality. Appropriate numerical
algorithms used to apply this methodology are also
described. Using an example, we show how to apply our
approach by providing simulation results using our CAD
system.</summary>
  <author>
    <name>Dr K Mekhnacha</name>
    <email/>
  </author>
  <author>
    <name>Dr E Mazer</name>
    <email/>
  </author>
  <author>
    <name>Dr P Bessiere</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1478/Atom/cogprints-eprint-1478.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1478"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1478/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1478/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1478"/>
  <published>2001-05-08Z</published>
  <updated>2011-03-11T08:54:37Z</updated>
  <id>http://cogprints.org/id/eprint/1478</id>
  <category term="confposter" label="Conference Poster" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1478"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1478</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1478">
    <sword:depositedOn>2001-05-08Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Thinking Adaptive: Towards a Behaviours Virtual Laboratory</title>
  <summary type="xhtml">In this paper we name some of the advantages of
virtual laboratories, and propose that a Behaviours
Virtual Laboratory should be useful for both biologists
and AI researchers, offering a new perspective for
understanding adaptive behaviour. We present our
development of a Behaviours Virtual Laboratory, which
at this stage is focused on action selection, and show
some experiments that illustrate the properties of our
proposal, which can be accessed via the Internet.
</summary>
  <author>
    <name>Carlos Gershenson</name>
    <email/>
  </author>
  <author>
    <name>Pedro Pablo Gonzalez Perez</name>
    <email/>
  </author>
  <author>
    <name>Jose Negrete Martinez</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/545/Atom/cogprints-eprint-545.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/545"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/545/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/545/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/545"/>
  <published>1999-06-28Z</published>
  <updated>2011-03-11T08:54:02Z</updated>
  <id>http://cogprints.org/id/eprint/545</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/545"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/545</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/545">
    <sword:depositedOn>1999-06-28Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Challenging the Computational Metaphor: Implications for How We Think</title>
  <summary type="xhtml">This paper explores the role of the traditional computational metaphor in our thinking as computer scientists, its influence on epistemological styles, and its implications for our understanding of cognition. It proposes to replace the conventional metaphor--a sequence of steps--with the notion of a community of interacting entities, and examines the ramifications of such a shift on these various ways in which we think.</summary>
  <author>
    <name>Lynn Andrea Stein</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/535/Atom/cogprints-eprint-535.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/535"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/535/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/535/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/535"/>
  <published>1999-04-09Z</published>
  <updated>2011-03-11T08:54:02Z</updated>
  <id>http://cogprints.org/id/eprint/535</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/535"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/535</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/535">
    <sword:depositedOn>1999-04-09T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">DRAMA, a connectionist architecture for control and learning in autonomous robots</title>
  <summary type="xhtml">This work proposes a connectionist architecture, DRAMA, for dynamic control and learning of autonomous robots. DRAMA stands for dynamical recurrent associative memory architecture. It is a time-delay recurrent neural network, using Hebbian update rules. It allows learning of spatio-temporal regularities and time series in discrete sequences of inputs, in the presence of a significant amount of noise. The first part of this paper gives the mathematical description of the architecture and analyses its performance theoretically and through numerical simulations. The second part of this paper reports on the implementation of DRAMA in simulated and physical robotic experiments. Training and rehearsal of the DRAMA architecture are computationally fast and inexpensive, which makes the model particularly suitable for controlling `computationally-challenged' robots. In the experiments, we use a basic hardware system with very limited computational capability and show that our robot can carry out real-time computation and on-line learning of relatively complex cognitive tasks. In these experiments, two autonomous robots wander randomly in a fixed environment, collecting information about its elements. By mutually associating information from their sensors and actuators, they learn about physical regularities underlying their experience of varying stimuli. The agents also learn from their mutual interactions. We use a teacher-learner scenario, based on mutual following of the two agents, to enable transmission of a vocabulary from one robot to the other.</summary>
  <author>
    <name>Aude Billard</name>
    <email/>
  </author>
  <author>
    <name>Gillian Hayes</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/139/Atom/cogprints-eprint-139.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/139"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/139/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/139/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/139"/>
  <published>2000-02-09T00:00:00Z</published>
  <updated>2011-03-11T08:53:41Z</updated>
  <id>http://cogprints.org/id/eprint/139</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/139"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/139</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/139">
    <sword:depositedOn>2000-02-09T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The theory of the organism-environment system: III. Role of efferent influences on receptors in the formation of knowledge.</title>
  <summary type="xhtml">The present article is an attempt to give - in the frame of the theory of the organism-environment system (Jarvilehto 1998a) - a new interpretation of the role of efferent influences on receptor activity and of the functions of the senses in the formation of knowledge. It is argued, on the basis of experimental evidence and theoretical considerations, that the senses are not transmitters of environmental information, but that they create a direct connection between the organism and the environment, which makes the development of a dynamic living system, the organism-environment system, possible. In this connection process the efferent influences on receptor activity are of particular significance, because with their help the receptors may be adjusted in relation to the parts of the environment which are most important in the achievement of behavioral results. Perception is the process of joining new parts of the environment to the organism-environment system; thus, the formation of knowledge by perception is based on reorganization (widening and differentiation) of the organism-environment system, and not on transmission of information from the environment. With the help of the efferent influences on receptors each organism creates its own peculiar world, which is simultaneously subjective and objective. The present considerations have far-reaching implications both for experimental work in the neurophysiology and psychology of perception and for philosophical considerations of knowledge formation.</summary>
  <author>
    <name>Timo Jarvilehto</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/518/Atom/cogprints-eprint-518.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/518"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/518/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/518/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/518"/>
  <published>1998-10-22T00:00:00Z</published>
  <updated>2011-03-11T08:54:01Z</updated>
  <id>http://cogprints.org/id/eprint/518</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/518"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/518</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/518">
    <sword:depositedOn>1998-10-22T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Cerebellar Control of Robot Arms</title>
  <summary type="xhtml">Decades of research into the structure and function of the cerebellum have led to a clear understanding of many of its cells, as well as of how learning takes place. Furthermore, there are many theories on what signals the cerebellum operates on, and how it works in concert with other parts of the nervous system. Nevertheless, the application of computational cerebellar models to the control of robot dynamics remains in its infancy. To date, a few applications have been realized, yet these have been limited to the control of traditional robot structures which, strictly speaking, do not require adaptive control for the tasks they perform, since their dynamic structures are relatively simple. The currently emerging family of light-weight robots poses a new challenge to robot control: due to their complex dynamics, traditional methods, which depend on a full analysis of the system dynamics, are no longer applicable, since the joints influence each other's dynamics during movement. Can artificial cerebellar models compete here? In this overview paper we present a succinct introduction to the cerebellum, and discuss where it could be applied to tackle problems in robotics. Without conclusively answering the above question, we give an overview of several applications of cerebellar models to robot control.</summary>
  <author>
    <name>P. van der Smagt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/506/Atom/cogprints-eprint-506.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/506"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/506/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/506/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/506"/>
  <published>1998-08-03T00:00:00Z</published>
  <updated>2011-03-11T08:54:00Z</updated>
  <id>http://cogprints.org/id/eprint/506</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/506"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/506</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/506">
    <sword:depositedOn>1998-08-03T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Evolutionary Robotics: Exploiting the full power of self-organization</title>
  <summary type="xhtml">In this paper I claim that one of the main characteristics that makes the Evolutionary Robotics approach suitable for the study of adaptive behavior in natural and artificial agents is the possibility of relying largely on a process of self-organization. Indeed, by using Artificial Evolution, the role of the designer may be limited to the specification of a fitness function which measures the ability of a given robot to perform a desired task. From an engineering point of view, the main advantage of relying on self-organization is that the designer does not need to divide the desired behavior into simple basic behaviors to be implemented in separate layers (or modules) of the robot control system. By selecting individuals for their ability to perform the desired behavior as a whole, simple basic behaviors can emerge from the interaction between several processes in the control system and from the interaction between the robot and the environment. From the point of view of the study of natural systems, the possibility of evolving robots that are free to select their own way of solving a task by interacting with their environment may help us to understand how natural organisms produce adaptive behavior. Finally, the attempt to scale up to more complex tasks may help us identify the critical features of Natural Evolution that allowed the emergence of the extraordinary variety of highly adapted life forms present on the planet.</summary>
  <author>
    <name>Stefano Nolfi</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1496/Atom/cogprints-eprint-1496.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1496"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1496/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1496/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1496"/>
  <published>2001-05-10T00:00:00Z</published>
  <updated>2011-03-11T08:54:38Z</updated>
  <id>http://cogprints.org/id/eprint/1496</id>
  <category term="journale" label="Journal (On-line/Unpaginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1496"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1496</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1496">
    <sword:depositedOn>2001-05-10T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Emergence and Categorization of Coordinated Visual Behavior Through Embodied Interaction</title>
  <summary type="xhtml">This paper discusses the emergence of sensorimotor coordination for ESCHeR, a 4DOF redundant foveated robot-head, through interaction with its environment. A feedback-error-learning (FEL)-based distributed control provides the system with explorative abilities, with reflexes constraining the learning space. A Kohonen network, trained at run-time, categorizes the sensorimotor patterns obtained over ESCHeR's interaction with its environment and enables the reinforcement of frequently executed actions, thus stabilizing the learning activity over time. We explain how the development of ESCHeR's visual abilities (namely gaze fixation and saccadic motion), from a context-free reflex-based control process to a context-dependent, pattern-based sensorimotor coordination, can be related to the Piagetian 'stage theory'.</summary>
  <author>
    <name>Luc Berthouze</name>
    <email/>
  </author>
  <author>
    <name>Yasuo Kuniyoshi</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/700/Atom/cogprints-eprint-700.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/700"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/700/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/700/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/700"/>
  <published>1998-06-22T00:00:00Z</published>
  <updated>2011-03-11T08:54:12Z</updated>
  <id>http://cogprints.org/id/eprint/700</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/700"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/700</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/700">
    <sword:depositedOn>1998-06-22T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">What sort of architecture is required for a human-like agent?</title>
  <summary type="xhtml">This paper is about how to give human-like powers to complete agents. For this the most important design choice concerns the overall architecture. Questions regarding detailed mechanisms, forms of representation, inference capabilities, knowledge etc. are best addressed in the context of a global architecture in which different design decisions need to be linked. Such a design would assemble various kinds of functionality into a complete, coherent working system, in which there are many concurrent, partly independent, partly mutually supportive, partly potentially incompatible processes, addressing a multitude of issues on different time scales, including asynchronous, concurrent motive generators. Designing human-like agents is part of the more general problem of understanding design space, niche space and their interrelations, for, in the abstract, there is no one optimal design, as biological diversity on earth shows.</summary>
  <author>
    <name>Aaron Sloman</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/695/Atom/cogprints-eprint-695.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/695"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/695/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/695/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/695"/>
  <published>1998-06-22T00:00:00Z</published>
  <updated>2011-03-11T08:54:12Z</updated>
  <id>http://cogprints.org/id/eprint/695</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/695"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/695</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/695">
    <sword:depositedOn>1998-06-22T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The evolution of what?</title>
  <summary type="xhtml">There is now a huge amount of interest in consciousness among scientists as well as philosophers, yet there is so much confusion and ambiguity in the claims and counter-claims that it is hard to tell whether any progress is being made. This ``position paper'' suggests that we can make progress by temporarily putting to one side questions about what consciousness is or which animals or machines have it or how it evolved. Instead we should focus on questions about the sorts of architectures that are possible for behaving systems and ask what sorts of capabilities, states and processes, might be supported by different sorts of architectures. We can then ask which organisms and machines have which sorts of architectures. This combines the standpoint of philosopher, biologist and engineer. If we can find a general theory of the variety of possible architectures (a characterisation of ``design space'') and the variety of environments, tasks and roles to which such architectures are well suited (a characterisation of ``niche space'') we may be able to use such a theory as a basis for formulating new more precisely defined concepts with which to articulate less ambiguous questions about the space of possible minds. For instance our initially ill-defined concept (``consciousness'') might split into a collection of more precisely defined concepts which can be used to ask unambiguous questions with definite answers. As a first step this paper explores a collection of conjectures regarding architectures and their evolution. In particular we explore architectures involving a combination of coexisting architectural levels including: (a) reactive mechanisms which evolved very early, (b) deliberative mechanisms which evolved later in response to pressures on information processing resources and (c) meta-management mechanisms that can explicitly inspect evaluate and modify some of the contents of various internal information structures. 
It is conjectured that in response to the needs of these layers, perceptual and action subsystems also developed layers, and also that an ``alarm'' system which initially existed only within the reactive layer may have become increasingly sophisticated and extensive as its inputs and outputs were linked to the newer layers. Processes involving the meta-management layer in the architecture could explain the origin of the notion of ``qualia''. Processes involving the ``alarm'' mechanism and mechanisms concerned with resource limits in the second and third layers give us an explanation of three main forms of emotion, helping to account for some of the ambiguities which have bedevilled the study of emotion. Further theoretical and practical benefits may come from further work based on this design-based approach to consciousness. A deeper, longer-term implication is the possibility of a new science investigating laws governing possible trajectories in design space and niche space, as these form parts of high-order feedback loops in the biosphere.</summary>
  <author>
    <name>Aaron Sloman</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3751/Atom/cogprints-eprint-3751.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3751"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3751/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3751/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3751"/>
  <published>2004-08-10T00:00:00Z</published>
  <updated>2011-03-11T08:55:39Z</updated>
  <id>http://cogprints.org/id/eprint/3751</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3751"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3751</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3751">
    <sword:depositedOn>2004-08-10T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The Ariadne's Clew Algorithm</title>
  <summary type="xhtml">We present a new approach to path planning, called the ``Ariadne's clew algorithm''. It is designed to find paths in high-dimensional continuous spaces and applies to robots with many degrees of freedom in static, as well as dynamic environments --- ones where obstacles may move. The Ariadne's clew algorithm comprises two sub-algorithms, called SEARCH and EXPLORE, applied in an interleaved manner. EXPLORE builds a representation of the accessible space while SEARCH looks for the target. Both are posed as optimization problems. We describe a real implementation of the algorithm to plan paths for a six degrees of freedom arm in a dynamic environment where another six degrees of freedom arm is used as a moving obstacle. Experimental results show that a path is found in about one second without any pre-processing.</summary>
  <author>
    <name>Dr E Mazer</name>
    <email/>
  </author>
  <author>
    <name>Dr J-M Ahuactzin</name>
    <email/>
  </author>
  <author>
    <name>Dr P Bessiere</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/802/Atom/cogprints-eprint-802.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/802"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/802/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/802/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/802"/>
  <published>1999-04-01T00:00:00Z</published>
  <updated>2011-03-11T08:54:17Z</updated>
  <id>http://cogprints.org/id/eprint/802</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/802"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/802</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/802">
    <sword:depositedOn>1999-04-01T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The Contribution of Society to the Construction of Individual Intelligence</title>
  <summary type="xhtml">It is argued that society is a crucial factor in the construction of individual intelligence; in other words, that it is important for intelligence to be socially situated in a way analogous to the physical situatedness of robots. Evidence that this may be the case is taken from developmental linguistics, the social intelligence hypothesis, the complexity of society, the need for self-reflection, and autism. The consequences for the development of artificial social agents are briefly considered. Finally, some challenges for research into socially situated intelligence are highlighted.</summary>
  <author>
    <name>Bruce Edmonds</name>
    <email/>
  </author>
  <author>
    <name>Kerstin Dautenhahn</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1106/Atom/cogprints-eprint-1106.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1106"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1106/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1106/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1106"/>
  <published>2000-11-15T00:00:00Z</published>
  <updated>2011-03-11T08:54:26Z</updated>
  <id>http://cogprints.org/id/eprint/1106</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1106"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1106</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1106">
    <sword:depositedOn>2000-11-15T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Learning sensory-motor cortical mappings without training</title>
  <summary type="xhtml">This paper shows how the relationship between two arrays of artificial neurons, representing different cortical regions, can be learned. The algorithm enables each neural network to self-organise into a topological map of the domain it represents at the same time as the relationship between these maps is found. Unlike previous methods, learning is achieved without a separate training phase; the algorithm which learns the mapping is also the one that performs the mapping.</summary>
  <author>
    <name>Michael Spratling</name>
    <email/>
  </author>
  <author>
    <name>Gillian Hayes</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/84/Atom/cogprints-eprint-84.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/84"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/84/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/84/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/84"/>
  <published>1999-05-08T00:00:00Z</published>
  <updated>2011-03-11T08:53:39Z</updated>
  <id>http://cogprints.org/id/eprint/84</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/84"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/84</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/84">
    <sword:depositedOn>1999-05-08T00:00:00Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Mixing Memory and Desire: Want and Will in Neural Modeling</title>
  <summary type="xhtml">Values are critical for intelligent behavior, since values determine interests, and interests determine relevance. Therefore we address relevance and its role in intelligent behavior in animals and machines. Animals avoid exhaustive enumeration of possibilities by focusing on relevant aspects of the environment, which emerge into the (cognitive) foreground, while suppressing irrelevant aspects, which submerge into the background. Nevertheless, the background is not invisible, and aspects of it can pop into the foreground if background processing deems them potentially relevant. Essential to these ideas are questions of how contexts are switched, which defines cognitive/behavioral episodes, and how new contexts are created, which allows the efficiency of foreground/background processing to be extended to new behaviors and cognitive domains. Next we consider mathematical characterizations of the foreground/background distinction, which we treat as a dynamic separation of the concrete space into (approximately) orthogonal subspaces, which are processed differently. Background processing is characterized by large receptive fields which project into a space of relatively low dimension to accomplish rough categorization of a novel stimulus and its approximate location. Such background processing is partly innate and partly learned, and we discuss possible correlational (Hebbian) learning mechanisms. Foreground processing is characterized by small receptive fields which project into a space of comparatively high dimension to accomplish precise categorization and localization of the stimuli relevant to the context. We also consider mathematical models of valences and affordances, which are an aspect of the foreground. Cells processing foreground information have no fixed meaning (i.e., their meaning is contextual), so it is necessary to explain how the processing accomplished by foreground neurons can be made relative to the context.
Thus we consider the properties of several simple mathematical models of how the contextual representation controls foreground processing. We show how simple correlational processes accomplish the contextual separation of foreground from background on the basis of differential reinforcement. That is, these processes account for the contextual separation of the concrete space into disjoint subspaces corresponding to the foreground and background. Since an episode may comprise the activation of several contexts (at varying levels of activity), we consider models, suggested by quantum mechanics, of foreground processing in superposition. That is, the contextual state may be a weighted superposition of several pure contexts, with a corresponding superposition of the foreground representations and the processes operating on them. This leads us to a consideration of the nature and origin of contexts. Although some contexts are innate, many are learned. We discuss a mathematical model of contexts which allows a context to split into several contexts, to agglutinate from several contexts, or to constellate out of relatively acontextual processing. Finally, we consider the acontextual processing which occurs when the current context is no longer relevant, and may trigger the switch to another context or the formation of a new context. We relate this to the situation known as "breakdown" in phenomenology.</summary>
  <author>
    <name>Bruce J. MacLennan</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/505/Atom/cogprints-eprint-505.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/505"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/505/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/505/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/505"/>
  <published>1998-07-31T00:00:00Z</published>
  <updated>2011-03-11T08:54:00Z</updated>
  <id>http://cogprints.org/id/eprint/505</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/505"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/505</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/505">
    <sword:depositedOn>1998-07-31Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Object recognition by matching symbolic edge graphs</title>
  <summary type="xhtml">We present an object recognition system based on symbolic graphs with object corners as vertices and outlines as edges. Corners are determined in a robust way by a multiscale combination of an operator modeling cortical end-stopped cells. Graphs are constructed by line-following between corners. Model matching is then done by finding subgraph isomorphisms in the image graph. The complexity is reduced by adding labels to corners and edges. The choice of labels makes the recognition system invariant under translation, rotation, and scaling.</summary>
  <author>
    <name>T. Lourens</name>
    <email/>
  </author>
  <author>
    <name>R.P. Würtz</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/694/Atom/cogprints-eprint-694.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/694"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/694/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/694/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/694"/>
  <published>1998-06-22Z</published>
  <updated>2011-03-11T08:54:12Z</updated>
  <id>http://cogprints.org/id/eprint/694</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/694"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/694</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/694">
    <sword:depositedOn>1998-06-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The ``Semantics'' of Evolution: Trajectories and Trade-offs in Design Space and Niche Space</title>
  <summary type="xhtml">This paper attempts to characterise a unifying overview of the practice of software engineers, AI designers, developers of evolutionary forms of computation, designers of adaptive systems, etc. The topic overlaps with theoretical biology, developmental psychology and perhaps some aspects of social theory. Just as much of theoretical computer science follows the lead of engineering intuitions and tries to formalise them, there are also some important emerging high-level cross-disciplinary ideas about natural information processing architectures and evolutionary mechanisms that can perhaps be unified and formalised in the future. There is some speculation about the evolution of human cognitive architectures and consciousness.</summary>
  <author>
    <name>Aaron Sloman</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/716/Atom/cogprints-eprint-716.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/716"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/716/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/716/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/716"/>
  <published>1998-07-09Z</published>
  <updated>2011-03-11T08:54:13Z</updated>
  <id>http://cogprints.org/id/eprint/716</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/716"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/716</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/716">
    <sword:depositedOn>1998-07-09Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The ``Semantics'' of Evolution: Trajectories and Trade-offs in Design Space and Niche Space.</title>
  <summary type="xhtml">This paper attempts to characterise a unifying overview of the practice of software engineers, AI designers, developers of evolutionary forms of computation, designers of adaptive systems, etc. The topic overlaps with theoretical biology, developmental psychology and perhaps some aspects of social theory. Just as much of theoretical computer science follows the lead of engineering intuitions and tries to formalise them, there are also some important emerging high-level cross-disciplinary ideas about natural information processing architectures and evolutionary mechanisms that can perhaps be unified and formalised in the future. There is some speculation about the evolution of human cognitive architectures and consciousness.</summary>
  <author>
    <name>A. Sloman</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/52/Atom/cogprints-eprint-52.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/52"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/52/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/52/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/52"/>
  <published>1998-07-03Z</published>
  <updated>2011-03-11T08:53:38Z</updated>
  <id>http://cogprints.org/id/eprint/52</id>
  <category term="techreport" label="Departmental Technical Report" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/52"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/52</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/52">
    <sword:depositedOn>1998-07-03Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Can artificial cerebellar models compete to control robots?</title>
  <summary type="xhtml">Contains extended abstracts of the NIPS*97 workshop "Can Artificial Models Compete to Control Robots?"</summary>
  <author>
    <name>P. van der Smagt</name>
    <email/>
  </author>
  <author>
    <name>D. Bullock</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1606/Atom/cogprints-eprint-1606.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1606"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1606/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1606/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1606"/>
  <published>2001-06-19Z</published>
  <updated>2011-03-11T08:54:42Z</updated>
  <id>http://cogprints.org/id/eprint/1606</id>
  <category term="newsarticle" label="Newspaper/Magazine Article" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1606"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1606</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1606">
    <sword:depositedOn>2001-06-19Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Lively Flights of Fancy</title>
  <summary type="xhtml">The topic is fascinating and the volume thought-provoking. It leaves one eager to hear who will ultimately win in this game of reverse-engineering life: the computationalists, the roboticists or the naturalists.</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/701/Atom/cogprints-eprint-701.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/701"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/701/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/701/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/701"/>
  <published>1998-06-22Z</published>
  <updated>2011-03-11T08:54:12Z</updated>
  <id>http://cogprints.org/id/eprint/701</id>
  <category term="techreport" label="Departmental Technical Report" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/701"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/701</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/701">
    <sword:depositedOn>1998-06-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">MINDER1: An implementation of a protoemotional agent architecture</title>
  <summary type="xhtml">An implementation of an autonomous resource-bound agent able to operate in a simulated dynamic and complex domain is described. The agent, called MINDER1, is a partial realisation of an architecture for motive processing and attention. It is shown that a global processing state, called perturbance, can emerge from interactions of subcomponents of the architecture. Perturbant states are characteristic features of many states that are commonly called emotional. The agent is compared to other computer simulations of emotional phenomena.</summary>
  <author>
    <name>Ian Wright</name>
    <email/>
  </author>
  <author>
    <name>Aaron Sloman</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/168/Atom/cogprints-eprint-168.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/168"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/168/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/168/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/168"/>
  <published>1998-05-25Z</published>
  <updated>2011-03-11T08:53:42Z</updated>
  <id>http://cogprints.org/id/eprint/168</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/168"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/168</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/168">
    <sword:depositedOn>1998-05-25Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Epistemic Autonomy in Models of Living Systems</title>
  <summary type="xhtml">This paper discusses epistemological consequences of embodied AI for Artificial Life models. The importance of robotic systems for ALife lies in the fact that they are not purely formal models and thus have to address issues of semantic adaptation and epistemic autonomy, which means the system's own ability to decide upon the validity of measurements. Epistemic autonomy in artificial systems is a difficult problem that poses foundational questions. The proposal is to concentrate on biological transformations of epistemological questions that have led to the development of modern ethology. Such an approach has proven to be useful in the design of control systems for behavior-based robots. It leads to a better understanding of modern ontological conceptions as well as a reacknowledgement of finality in the description and design of autonomous systems.</summary>
  <author>
    <name>Erich Prem</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/528/Atom/cogprints-eprint-528.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/528"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/528/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/528/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/528"/>
  <published>1999-01-07Z</published>
  <updated>2011-03-11T08:54:02Z</updated>
  <id>http://cogprints.org/id/eprint/528</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/528"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/528</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/528">
    <sword:depositedOn>1999-01-07Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Exploiting qualitative knowledge to enhance skill acquisition</title>
  <summary type="xhtml">One of the most interesting problems faced by Artificial Intelligence researchers is to reproduce a capability typical of living beings: that of learning to perform motor tasks, a problem known as skill acquisition. This is a very difficult task because the overall behavior of an agent is the result of quite a complex activity, involving sensory, planning and motor processing. In this paper, I present a novel approach for acquiring new skills, named Soft Teaching, that is characterized by a learning-by-experience process, in which an agent exploits a symbolic, qualitative description of the task to perform that cannot, however, be used directly for control purposes. A specific Soft Teaching technique, named Symmetries, was implemented and tested against a continuous-domained version of the well-known pole-balancing problem.</summary>
  <author>
    <name>Cristina Baroglio</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/479/Atom/cogprints-eprint-479.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/479"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/479/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/479/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/479"/>
  <published>1998-06-24Z</published>
  <updated>2011-03-11T08:53:59Z</updated>
  <id>http://cogprints.org/id/eprint/479</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/479"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/479</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/479">
    <sword:depositedOn>1998-06-24Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Perceptual grounding in robots</title>
  <summary type="xhtml">This paper reports on an experiment in which robotic agents are able to ground objects in their environment using low-level sensors. The reported experiment is part of a larger experiment, in which autonomous agents ground an adaptive language through self-organization. Grounding is achieved by the implementation of the hypothesis that meaning can be created using mechanisms like feature generation and self-organization. The experiments were carried out to investigate how agents may construct features in order to learn to discriminate objects from each other. Meaning is formed to give semantic value to the language, which is also created by the agents in the same experiments. From the experimental results we can conclude that the robots are able to ground meaning in this self-organizing manner. This paper focuses on the meaning creation and will only discuss the language formation very briefly. The paper sketches the tested hypothesis, the experimental set-up and experimental results.</summary>
  <author>
    <name>Paul Vogt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/494/Atom/cogprints-eprint-494.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/494"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/494/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/494/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/494"/>
  <published>1998-07-03Z</published>
  <updated>2011-03-11T08:54:00Z</updated>
  <id>http://cogprints.org/id/eprint/494</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/494"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/494</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/494">
    <sword:depositedOn>1998-07-03Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Teaching a robot to see how it moves</title>
  <summary type="xhtml">The positioning of a robot hand in order to grasp an object is a problem fundamental to robotics. The task we want to perform can be described as follows: given a visual scene the robot arm must reach an indicated point in that visual scene. This marked point indicates the observed object that has to be grasped. In order to accomplish this task, a mapping from the visual scene to the corresponding robot joint values must be available. The task set out in this chapter is to design a self-learning controller that constructs that mapping without knowledge of the geometry of the camera-robot system.</summary>
  <author>
    <name>P. van der Smagt</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/493/Atom/cogprints-eprint-493.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/493"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/493/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/493/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/493"/>
  <published>1998-07-03Z</published>
  <updated>2011-03-11T08:54:00Z</updated>
  <id>http://cogprints.org/id/eprint/493</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/493"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/493</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/493">
    <sword:depositedOn>1998-07-03Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Visual feedback in motion</title>
  <summary type="xhtml">In this chapter we introduce a method for model-free monocular visual guidance of a robot arm. The robot arm, with a single camera in its end effector, should be positioned above a stationary target. It is shown that a trajectory can be planned in visual space by using components of the optic flow, and this trajectory can be translated to joint torques by a self-learning neural network. No model of the robot, camera, or environment is used. The method reaches a high grasping accuracy after only a few trials.</summary>
  <author>
    <name>P. van der Smagt</name>
    <email/>
  </author>
  <author>
    <name>F. Groen</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/448/Atom/cogprints-eprint-448.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/448"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/448/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/448/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/448"/>
  <published>1998-06-09Z</published>
  <updated>2011-03-11T08:53:57Z</updated>
  <id>http://cogprints.org/id/eprint/448</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/448"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/448</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/448">
    <sword:depositedOn>1998-06-09Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Action Selection in a hypothetical house robot: Using those RL numbers</title>
  <summary type="xhtml">Reinforcement Learning (RL) methods, in contrast to many forms of machine learning, build up value functions for actions. That is, an agent not only knows `what' it wants to do, it also knows `how much' it wants to do it. Traditionally, the latter are used to produce the former and are then ignored, since the agent is assumed to act alone. But the latter numbers contain useful information - they tell us how much the agent will suffer if its action is not executed (perhaps not much). They tell us which actions the agent can compromise on and which it cannot. It is clear that many interesting systems possess multiple parallel and conflicting goals, all demanding attention, and none of which can be fully satisfied except at the expense of others. Animals are the prime example of such systems. In [Humphrys, 1995], I introduced the W-learning algorithms, showing one method of resolving competition among behaviors automatically by reference to their RL values. The scheme has the unusual feature that behaviors are at all times in selfish pursuit of their own goals and have no explicit concept of cooperation, despite residing in the same body. In this paper, I apply W-learning to the world of a hypothetical house robot, which doubles as family toy, mobile security camera, mobile smoke alarm and occasional vacuum cleaner. I show how a W-learning community of behaviors inside the robot will support a robust behavior pattern, capable of opportunistic behavior, avoiding dithering, and allowing for the concept of default behavior and expression of low-priority goals.</summary>
  <author>
    <name>Mark Humphrys</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/447/Atom/cogprints-eprint-447.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/447"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/447/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/447/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/447"/>
  <published>1998-06-09Z</published>
  <updated>2011-03-11T08:53:57Z</updated>
  <id>http://cogprints.org/id/eprint/447</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/447"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/447</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/447">
    <sword:depositedOn>1998-06-09Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Action Selection methods using Reinforcement Learning</title>
  <summary type="xhtml">Action Selection schemes, when translated into precise algorithms, typically involve considerable design effort and tuning of parameters. Little work has been done on solving the problem using learning. This paper compares eight different methods of solving the action selection problem using Reinforcement Learning (learning from rewards). The methods range from centralised and cooperative to decentralised and selfish. They are tested in an artificial world and their performance, memory requirements and reactiveness are compared. Finally, the possibility of more exotic, ecosystem-like decentralised models is considered.</summary>
  <author>
    <name>Mark Humphrys</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/718/Atom/cogprints-eprint-718.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/718"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/718/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/718/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/718"/>
  <published>1998-07-18Z</published>
  <updated>2011-03-11T08:54:13Z</updated>
  <id>http://cogprints.org/id/eprint/718</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/718"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/718</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/718">
    <sword:depositedOn>1998-07-18Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Actual Possibilities</title>
  <summary type="xhtml">This is a philosophical `position paper', starting from the observation that we have an intuitive grasp of a family of related concepts of ``possibility'', ``causation'' and ``constraint'' which we often use in thinking about complex mechanisms, and perhaps also in perceptual processes, which according to Gibson are primarily concerned with detecting positive and negative affordances, such as support, obstruction, graspability, etc. We are able to talk about, think about, and perceive possibilities, such as possible shapes, possible pressures, possible motions, and also risks, opportunities and dangers. We can also think about constraints linking such possibilities. If such abilities are useful to us (and perhaps other animals) they may be equally useful to intelligent artefacts. All this bears on a collection of different more technical topics, including modal logic, constraint analysis, qualitative reasoning, naive physics, the analysis of functionality, and the modelling of design processes. The paper suggests that our ability to use knowledge about ``de-re'' modality is more primitive than the ability to use ``de-dicto'' modalities, in which modal operators are applied to sentences. The paper explores these ideas, links them to notions of ``causation'' and ``machine'', and suggests that they are applicable to virtual or abstract machines as well as physical machines. Some conclusions are drawn regarding the nature of mind and consciousness.</summary>
  <author>
    <name>A. Sloman</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/492/Atom/cogprints-eprint-492.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/492"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/492/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/492/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/492"/>
  <published>1998-07-03Z</published>
  <updated>2011-03-11T08:54:00Z</updated>
  <id>http://cogprints.org/id/eprint/492</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/492"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/492</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/492">
    <sword:depositedOn>1998-07-03Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Analysis and control of a rubbertuator arm</title>
  <summary type="xhtml">The control of light-weight compliant robot arms is cumbersome due to the fact that their Coriolis forces are large, and the forces exerted by the relatively weak actuators may change in time due to external (e.g., temperature) influences. We describe and analyse the behaviour of a light-weight robot arm, the SoftArm robot. It is found that the hysteretic force-position relationship of the arm can be explained from its structure. This knowledge is used in the construction of a neural-network based controller. Experiments show that the network is able to accurately control the robot arm after a training session of only a few minutes.</summary>
  <author>
    <name>P. van der Smagt</name>
    <email/>
  </author>
  <author>
    <name>F. Groen</name>
    <email/>
  </author>
  <author>
    <name>K. Schulten</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/704/Atom/cogprints-eprint-704.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/704"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/704/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/704/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/704"/>
  <published>1998-06-22Z</published>
  <updated>2011-03-11T08:54:12Z</updated>
  <id>http://cogprints.org/id/eprint/704</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/704"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/704</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/704">
    <sword:depositedOn>1998-06-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Beyond Turing Equivalence</title>
  <summary type="xhtml">What is the relation between intelligence and computation? Although the difficulty of defining `intelligence' is widely recognized, many are unaware that it is hard to give a satisfactory definition of `computational' if computation is supposed to provide a non-circular explanation for intelligent abilities. The only well-defined notion of `computation' is what can be generated by a Turing machine or a formally equivalent mechanism. This is not adequate for the key role in explaining the nature of mental processes, because it is too general, as many computations involve nothing mental, nor even processes: they are simply abstract structures. We need to combine the notion of `computation' with that of `machine'. This may still be too restrictive, if some non-computational mechanisms prove to be useful for intelligence. We need a theory-based taxonomy of {\em architectures} and {\em mechanisms} and corresponding process types. Computational machines may turn out to be a sub-class of the machines available for implementing intelligent agents. The more general analysis starts with the notion of a system with independently variable, causally interacting sub-states that have different causal roles, including both `belief-like' and `desire-like' sub-states, and many others. There are many significantly different such architectures. For certain architectures (including simple computers), some sub-states have a semantic interpretation for the system. The relevant concept of semantics is defined partly in terms of a kind of Tarski-like structural correspondence (not to be confused with isomorphism). This always leaves some semantic indeterminacy, which can be reduced by causal loops involving the environment. But the causal links are complex, can share causal pathways, and always leave mental states to some extent semantically indeterminate.</summary>
  <author>
    <name>Aaron Sloman</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/430/Atom/cogprints-eprint-430.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/430"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/430/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/430/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/430"/>
  <published>1998-03-22Z</published>
  <updated>2011-03-11T08:53:54Z</updated>
  <id>http://cogprints.org/id/eprint/430</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/430"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/430</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/430">
    <sword:depositedOn>1998-03-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Did HAL Commit Murder?</title>
  <summary type="xhtml">The first robot homicide was committed in 1981, according to my files. I have a yellowed clipping dated 12/9/81 from the Philadelphia Inquirer--not the National Enquirer--with the headline:       Robot killed repairman, Japan reports   The story was an anti-climax: at the Kawasaki Heavy Industries plant in Akashi, a malfunctioning robotic arm pushed a repairman against a gearwheel-milling machine, crushing him to death. The repairman had failed to follow proper instructions for shutting down the arm before entering the workspace. Why, indeed, had this industrial accident in Japan been reported in a Philadelphia newspaper? Every day somewhere in the world a human worker is killed by one machine or another. The difference, of course, was that in the public imagination at least, this was no ordinary machine; this was a robot, a machine that might have a mind, might have evil intentions, might be capable not just of homicide but of murder.</summary>
  <author>
    <name>Daniel C Dennett</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/471/Atom/cogprints-eprint-471.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/471"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/471/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/471/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/471"/>
  <published>1998-06-22Z</published>
  <updated>2011-03-11T08:53:59Z</updated>
  <id>http://cogprints.org/id/eprint/471</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/471"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/471</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/471">
    <sword:depositedOn>1998-06-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">SIM_AGENT: A toolkit for exploring agent designs</title>
  <summary type="xhtml">SIM_AGENT is a toolkit that arose out of a project concerned with designing an architecture for an autonomous agent with human-like capabilities. Analysis of requirements showed a need to combine a wide variety of richly interacting mechanisms, including independent asynchronous sources of motivation and the ability to reflect on which motives to adopt, when to achieve them, how to achieve them, and so on. These internal `management' (and meta-management) processes involve a certain amount of parallelism, but resource limits imply the need for explicit control of attention. Such control problems can lead to emotional and other characteristically human affective states. In order to explore these ideas, we needed a toolkit to facilitate experiments with various architectures in various environments, including other agents. The paper outlines requirements and summarises the main design features of a Pop-11 toolkit supporting both rule-based and `sub-symbolic' mechanisms. Some experiments including hybrid architectures and genetic algorithms are summarised.</summary>
  <author>
    <name>Aaron Sloman</name>
    <email/>
  </author>
  <author>
    <name>Riccardo Poli</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/703/Atom/cogprints-eprint-703.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/703"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/703/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/703/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/703"/>
  <published>1998-06-22Z</published>
  <updated>2011-03-11T08:54:12Z</updated>
  <id>http://cogprints.org/id/eprint/703</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/703"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/703</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/703">
    <sword:depositedOn>1998-06-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Towards a Design-Based Analysis of Emotional Episodes</title>
  <summary type="xhtml">The design-based approach is a methodology for investigating mechanisms capable of generating mental phenomena, whether introspectively or externally observed, and whether they occur in humans, other animals or robots. The study of designs satisfying requirements for autonomous agency can provide new deep theoretical insights at the information processing level of description of mental mechanisms. Designs for working systems (whether on paper or implemented on computers) can systematically explicate old explanatory concepts and generate new concepts that allow new and richer interpretations of human phenomena. To illustrate this, some aspects of human grief are analysed in terms of a particular information processing architecture being explored in our research group. We do not claim that this architecture is part of the causal structure of the human mind; rather, it represents an early stage in the iterative search for a deeper and more general architecture, capable of explaining more phenomena. However even the current early design provides an interpretative ground for some familiar phenomena, including characteristic features of certain emotional episodes, particularly the phenomenon of perturbance (a partial or total loss of control of attention). The paper attempts to expound and illustrate the design-based approach to cognitive science and philosophy, to demonstrate the potential effectiveness of the approach in generating interpretative possibilities, and to provide first steps towards an information processing account of `perturbant', emotional episodes.</summary>
  <author>
    <name>Ian Wright</name>
    <email/>
  </author>
  <author>
    <name>Aaron Sloman</name>
    <email/>
  </author>
  <author>
    <name>Luc Beaudoin</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/2378/Atom/cogprints-eprint-2378.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/2378"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/2378/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/2378/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/2378"/>
  <published>2002-08-08Z</published>
  <updated>2011-03-11T08:54:58Z</updated>
  <id>http://cogprints.org/id/eprint/2378</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/2378"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/2378</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/2378">
    <sword:depositedOn>2002-08-08Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Uncalibrated visual servoing</title>
  <summary type="xhtml">Visual servoing is a process to enable a robot to position a camera with
respect to known landmarks using the visual data obtained by the camera
itself to guide camera motion. A solution is described which requires very
little a priori information, freeing it from being specific to a particular
configuration of robot and camera. The solution is based on closed loop
control together with deliberate perturbations of the trajectory to provide
calibration movements for refining that trajectory.  Results from
experiments in simulation and on a physical robot arm (camera-in-hand
configuration) are presented.
</summary>
  <author>
    <name>M. W. Spratling</name>
    <email/>
  </author>
  <author>
    <name>R. Cipolla</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/702/Atom/cogprints-eprint-702.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/702"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/702/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/702/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/702"/>
  <published>1998-06-22Z</published>
  <updated>2011-03-11T08:54:12Z</updated>
  <id>http://cogprints.org/id/eprint/702</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/702"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/702</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/702">
    <sword:depositedOn>1998-06-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">What sort of control system is able to have a personality?</title>
  <summary type="xhtml">This paper outlines a design-based methodology for the study of mind as a part of the broad discipline of Artificial Intelligence. Within that framework some architectural requirements for human-like minds are discussed, and some preliminary suggestions made regarding mechanisms underlying motivation, emotions, and personality. A brief description is given of the `Nursemaid' or `Minder' scenario being used at the University of Birmingham as a framework for research on these problems. It may be possible later to combine some of these ideas with work on synthetic agents inhabiting virtual reality environments.</summary>
  <author>
    <name>Aaron Sloman</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/248/Atom/cogprints-eprint-248.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/248"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/248/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/248/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/248"/>
  <published>1998-03-22Z</published>
  <updated>2011-03-11T08:53:46Z</updated>
  <id>http://cogprints.org/id/eprint/248</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/248"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/248</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/248">
    <sword:depositedOn>1998-03-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Cog as a Thought Experiment</title>
  <summary type="xhtml">In her presentation at the Monte Verità workshop, Maja Mataric showed us a videotape of  her robots cruising together through the lab, and remarked, aptly: "They're flocking, but  that's not what they think they're doing." This is a vivid instance of a phenomenon that lies at  the heart of all the research I learned about at Monte Verità: the execution of surprisingly  successful "cognitive" behaviors by systems that did not explicitly represent, and did not  need to explicitly represent, what they were doing. How "high" in the intuitive scale of  cognitive sophistication can such unwitting prowess reach? All the way, apparently, since I  want to echo Maja's observation with one of my own: "These roboticists are doing philosophy, but that's not what they think they're doing." It is possible, then, even to do  philosophy--that most intellectual of activities--without realizing that that is what you are  doing. It is even possible to do it well, for this is a good, new way of addressing antique  philosophical puzzles.</summary>
  <author>
    <name>Daniel Dennett</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/452/Atom/cogprints-eprint-452.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/452"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/452/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/452/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/452"/>
  <published>1998-06-09Z</published>
  <updated>2011-03-11T08:53:58Z</updated>
  <id>http://cogprints.org/id/eprint/452</id>
  <category term="techreport" label="Departmental Technical Report" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/452"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/452</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/452">
    <sword:depositedOn>1998-06-09Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">W-learning: Competition among selfish Q-learners</title>
  <summary type="xhtml">W-learning is a self-organising action-selection scheme for systems with multiple parallel goals, such as autonomous mobile robots. It uses ideas drawn from the subsumption architecture for mobile robots (Brooks), implementing them with the Q-learning algorithm from reinforcement learning (Watkins). Brooks explores the idea of multiple sensing-and-acting agents within a single robot, more than one of which is capable of controlling the robot on its own if allowed. I introduce a model where the agents are not only autonomous, but are in fact engaged in direct competition with each other for control of the robot. Interesting robots are ones where no agent achieves total victory, but rather the state-space is fragmented among different agents. Having the agents operate by Q-learning proves to be a way to implement this, leading to a local, incremental algorithm (W-learning) to resolve competition. I present a sketch proof that this algorithm converges when the world is a discrete, finite Markov decision process. For each state, competition is resolved with the most likely winner of the state being the agent that is most likely to suffer the most if it does not win. In this way, W-learning can be viewed as `fair' resolution of competition. In the empirical section, I show how W-learning may be used to define spaces of agent-collections whose action selection is learnt rather than hand-designed. This is the kind of solution-space that may be searched with a genetic algorithm.</summary>
  <author>
    <name>Mark Humphrys</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/446/Atom/cogprints-eprint-446.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/446"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/446/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/446/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/446"/>
  <published>1998-06-03Z</published>
  <updated>2011-03-11T08:53:57Z</updated>
  <id>http://cogprints.org/id/eprint/446</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/446"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/446</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/446">
    <sword:depositedOn>1998-06-03Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">A boy scout, Toto, and a bird: How situated cognition is different from situated robotics</title>
  <summary type="xhtml">We are at an exciting turning point in the development of intelligent machines. Situated robot designers (Maes, 1990) have given the AI community concrete examples of alternative architectures for coordinating sensation and action. These examples suggest that, for some navigation behaviors at least, predefined maps of the world and control structures are unnecessary. This work has developed in parallel with and lends credence to similar criticisms of models of human reasoning (Winograd and Flores, 1986; Suchman, 1987). However, it is crucial to understand that situated robotic designs are pragmatic, emphasizing engineering convenience and new ways of building machines. Brooks, et al. (1991) are not trying to model human beings, and to a significant degree their robotic designs violate situated cognition hypotheses about the nature of human knowledge and representation construction. I will sketch out some of these distinctions here, and suggest how they might be used to discover alternative architectures for robotics.</summary>
  <author>
    <name>William J. Clancey</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1595/Atom/cogprints-eprint-1595.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1595"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1595/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1595/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1595"/>
  <published>2001-06-19Z</published>
  <updated>2011-03-11T08:54:41Z</updated>
  <id>http://cogprints.org/id/eprint/1595</id>
  <category term="bookchapter" label="Book Chapter" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1595"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1595</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1595">
    <sword:depositedOn>2001-06-19Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Grounding Symbolic Capacity in Robotic Capacity</title>
  <summary type="xhtml">Despite considerations in favor of symbol grounding, neither pure
connectionism nor pure nonsymbolic robotics can be counted out yet on
the path to the robotic Turing Test. So far only computationalism and pure
AI have fallen by the wayside. If it turns out that no internal symbols
at all underlie our symbolic (email Turing Test) capacity, if dynamic
states of neural nets alone or sensorimotor mechanisms subserving
robotic capacities alone can successfully generate our full robotic
performance capacity without symbols, that is still the decisive test
for the presence of mind and everyone should be ready to accept the
verdict.  For even if we should happen to be wrong about such a robot,
it is clear that no one (not even an advocate of the stronger
neural-equivalence version of the Turing Test, nor even the Blind
Watchmaker who designed us but is no more a mind-reader than we are) can
ever hope to be the wiser.</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/450/Atom/cogprints-eprint-450.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/450"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/450/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/450/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/450"/>
  <published>1998-06-09Z</published>
  <updated>2011-03-11T08:53:57Z</updated>
  <id>http://cogprints.org/id/eprint/450</id>
  <category term="techreport" label="Departmental Technical Report" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/450"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/450</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/450">
    <sword:depositedOn>1998-06-09Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Towards self-organising Action Selection</title>
  <summary type="xhtml">Systems with multiple parallel goals (e.g. autonomous mobile robots) have a problem analogous to that of action selection in ethology. Architectures such as the subsumption architecture (Brooks) involve multiple sensing-and-acting agents within a single robot, more than one of which is capable of controlling the robot on its own if allowed. Which to give control at a given moment is normally regarded as a (difficult) problem of design. In a quest for a scheme where the agents decide for themselves in a sensible manner, I introduce a model where the agents are not only autonomous but are in full competition with each other for control of the robot. Interesting robots are ones where no agent achieves total victory, but rather a series of compromises is reached. Having the agents operate by the reinforcement learning algorithm Q-learning (Watkins) allows the introduction of an algorithm called `W-learning', by which the agents learn to focus their competitive efforts in a manner similar to agents with limited spending power in an economy. In this way, the population of agents organises its own action selection in a coherent way that supports parallelism and opportunism. In the empirical section, I show how the relative influence an agent has on its robot may be controlled by adjusting its rewards. The possibility of automated search of agent-combinations is considered.</summary>
  <author>
    <name>Mark Humphrys</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/451/Atom/cogprints-eprint-451.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/451"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/451/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/451/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/451"/>
  <published>1998-06-09Z</published>
  <updated>2011-03-11T08:53:57Z</updated>
  <id>http://cogprints.org/id/eprint/451</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/451"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/451</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/451">
    <sword:depositedOn>1998-06-09Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">W-learning: A simple RL-based Society of Mind</title>
  <summary type="xhtml">W-learning is a self-organising action-selection scheme for systems with multiple parallel goals, such as autonomous mobile robots. It uses ideas drawn from the subsumption architecture for mobile robots (Brooks), implementing them with the Q-learning algorithm from reinforcement learning (Watkins). Brooks explores the idea of multiple sensing-and-acting agents within a single robot, more than one of which is capable of controlling the robot on its own if allowed. I introduce a model where the agents are not only autonomous, but are in fact engaged in direct competition with each other for control of the robot. Interesting robots are ones where no agent achieves total victory, but rather the state-space is fragmented among different agents. Having the agents operate by Q-learning proves to be a way to implement this, leading to a local, incremental algorithm (W-learning) to resolve competition. I present a sketch proof that this algorithm converges when the world is a discrete, finite Markov decision process. For each state, competition is resolved with the most likely winner of the state being the agent that is most likely to suffer the most if it does not win. In this way, W-learning can be viewed as `fair' resolution of competition. In the empirical section, I show how W-learning may be used to define spaces of agent-collections whose action selection is learnt rather than hand-designed. This is the kind of solution-space that may be searched with a genetic algorithm.</summary>
  <author>
    <name>Mark Humphrys</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1592/Atom/cogprints-eprint-1592.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1592"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1592/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1592/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1592"/>
  <published>2001-06-18Z</published>
  <updated>2011-03-11T08:54:41Z</updated>
  <id>http://cogprints.org/id/eprint/1592</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1592"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1592</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1592">
    <sword:depositedOn>2001-06-18Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't</title>
  <summary type="xhtml">Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules
operating only on the symbols' shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one
accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer,
because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic
interpretation. But even after computers and computation have been successfully distinguished from other kinds of things, mental
states will not just be the implementations of the right symbol systems, because of the symbol grounding problem: The
interpretation of a symbol system is not intrinsic to the system; it is projected onto it by the interpreter. This is not true of our
thoughts. We must accordingly be more than just computers. My guess is that the meanings of our symbols are grounded in the
substrate of our robotic capacity to interact with that real world of objects, events and states of affairs that our symbols are
systematically interpretable as being about. </summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/720/Atom/cogprints-eprint-720.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/720"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/720/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/720/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/720"/>
  <published>1998-07-18Z</published>
  <updated>2011-03-11T08:54:13Z</updated>
  <id>http://cogprints.org/id/eprint/720</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/720"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/720</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/720">
    <sword:depositedOn>1998-07-18Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Computational Modelling Of Motive-Management Processes</title>
  <summary type="xhtml">This is a 5 page summary with three diagrams of the main objectives and some work in progress at the University of Birmingham Cognition and Affect project. involving: Professor Glyn Humphreys (School of Psychology), and Luc Beaudoin, Chris Paterson, Tim Read, Edmund Shing, Ian Wright, Ahmed El-Shafei, and (from October 1994) Chris Complin (research students). The project is concerned with "global" design requirements for coping simultaneously with coexisting but possibly unrelated goals, desires, preferences, intentions, and other kinds of motivators, all at different stages of processing. Our work builds on and extends seminal ideas of H.A.Simon (1967). We are exploring "broad and shallow" architectures combining varied capabilities most of which are not implemented in great depth. The poster summarises some ideas about management and meta-management processes, attention filtering, and the relevance to emotional states involved "perturbances", where there is partial loss of control of attention.</summary>
  <author>
    <name>A. Sloman</name>
    <email/>
  </author>
  <author>
    <name>L. Beaudouin</name>
    <email/>
  </author>
  <author>
    <name>I. Wright</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/429/Atom/cogprints-eprint-429.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/429"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/429/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/429/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/429"/>
  <published>1998-03-22Z</published>
  <updated>2011-03-11T08:53:54Z</updated>
  <id>http://cogprints.org/id/eprint/429</id>
  <category term="preprint" label="Preprint" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/429"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/429</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/429">
    <sword:depositedOn>1998-03-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Consciousness in Human and Robot Minds</title>
  <summary type="xhtml">The best reason for believing that robots might some day  become conscious is that we human beings are conscious, and we are a sort of robot ourselves. That is, we are extraordinarily  complex self-controlling, self-sustaining physical mechanisms, designed over the eons by natural selection, and operating  according to the same well-understood principles that govern  all the other physical processes in living things: digestive and metabolic processes, self-repair and reproductive processes, for instance. It may be wildly over-ambitious to suppose that human artificers can repeat Nature's triumph, with variations  in material, form, and design process, but this is not a deep objection. It is not as if a conscious machine contradicted any fundamental laws of nature, the way a perpetual motion  machine does. Still, many skeptics believe--or in any event want to believe--that it will never be done. I wouldn't wager against them, but my reasons for skepticism are mundane,  economic reasons, not theoretical reasons.</summary>
  <author>
    <name>Daniel C. Dennett</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1591/Atom/cogprints-eprint-1591.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1591"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1591/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1591/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1591"/>
  <published>2001-06-18Z</published>
  <updated>2011-03-11T08:54:41Z</updated>
  <id>http://cogprints.org/id/eprint/1591</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1591"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1591</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1591">
    <sword:depositedOn>2001-06-18Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life</title>
  <summary type="xhtml">Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering
attempts to build systems to meet certain functional specifications, reverse bioengineering attempts to understand how systems
that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal
principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In
itself, a computational model is just an ungrounded symbol system; no matter how closely it matches the properties of what is
being modelled, it matches them only formally, with the mediation of an interpretation. Synthetic life is not open to this objection,
but it is still an open question how close a functional equivalence is needed in order to capture life. Close enough to fool the Blind
Watchmaker is probably close enough, but would that require molecular indistinguishability, and if so, do we really need to go
that far? </summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/496/Atom/cogprints-eprint-496.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/496"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/496/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/496/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/496"/>
  <published>1998-07-03Z</published>
  <updated>2011-03-11T08:54:00Z</updated>
  <id>http://cogprints.org/id/eprint/496</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/496"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/496</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/496">
    <sword:depositedOn>1998-07-03Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The locally linear nested network for robot manipulation</title>
  <summary type="xhtml">We present a method for accurate representation of high-dimensional unknown functions from random samples drawn from its input space. The method builds representations of the function by recursively splitting the input space in smaller subspaces, while in each of these subspaces a linear approximation is computed. The representations of the function at all levels (i.e., depths in the tree) are retained during the learning process, such that a good generalisation is available as well as more accurate representations in some subareas. Therefore, fast and accurate learning are combined in this method. The method, which is applied to hand-eye coordination of a robot arm, is shown to be superior to other neural networks.</summary>
  <author>
    <name>P. van der Smagt</name>
    <email/>
  </author>
  <author>
    <name>F. Groen</name>
    <email/>
  </author>
  <author>
    <name>F. van het Groenewoud</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/721/Atom/cogprints-eprint-721.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/721"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/721/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/721/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/721"/>
  <published>1998-07-18Z</published>
  <updated>2011-03-11T08:54:13Z</updated>
  <id>http://cogprints.org/id/eprint/721</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/721"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/721</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/721">
    <sword:depositedOn>1998-07-18Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Semantics in an intelligent control system</title>
  <summary type="xhtml">Much research on intelligent systems has concentrated on low level mechanisms or sub-systems of restricted functionality. We need to understand how to put all the pieces together in an \ul{architecture} for a complete agent with its own mind, driven by its own desires. A mind is a self-modifying control system, with a hierarchy of levels of control, and a different hierarchy of levels of implementation. AI needs to explore alternative control architectures and their implications for human, animal, and artificial minds. Only within the framework of a theory of actual and possible architectures can we solve old problems about the concept of mind and causal roles of desires, beliefs, intentions, etc. The high level ``virtual machine'' architecture is more useful for this than detailed mechanisms. E.g. the difference between connectionist and symbolic implementations is of relatively minor importance. A good theory provides both explanations and a framework for systematically generating concepts of possible states and processes. Lacking this, philosophers cannot provide good analyses of concepts, psychologists and biologists cannot specify what they are trying to explain or explain it, and psychotherapists and educationalists are left groping with ill-understood problems. The paper sketches some requirements for such architectures, and analyses an idea shared between engineers and philosophers: the concept of ``semantic information''.</summary>
  <author>
    <name>A. Sloman</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1587/Atom/cogprints-eprint-1587.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1587"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1587/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1587/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1587"/>
  <published>2001-06-18Z</published>
  <updated>2011-03-11T08:54:41Z</updated>
  <id>http://cogprints.org/id/eprint/1587</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1587"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1587</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1587">
    <sword:depositedOn>2001-06-18Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Artificial Life: Synthetic Versus Virtual</title>
  <summary type="xhtml">Artificial life can take two forms: synthetic and virtual. In principle, the materials and properties of synthetic
living systems could differ radically from those of natural living systems yet still resemble them enough to be really alive if they
are grounded in the relevant causal interactions with the real world. Virtual (purely computational) "living" systems, in contrast,
are just ungrounded symbol systems that are systematically interpretable as if they were alive; in reality they are no more alive
than a virtual furnace is hot. Virtual systems are better viewed as "symbolic oracles" that can be used (interpreted) to predict and
explain real systems, but not to instantiate them. The vitalistic overinterpretation of virtual life is related to the animistic
overinterpretation of virtual minds and is probably based on an implicit (and possibly erroneous) intuition that living things have
actual or potential mental lives. </summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/291/Atom/cogprints-eprint-291.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/291"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/291/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/291/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/291"/>
  <published>1998-05-05Z</published>
  <updated>2011-03-11T08:53:48Z</updated>
  <id>http://cogprints.org/id/eprint/291</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/291"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/291</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/291">
    <sword:depositedOn>1998-05-05Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Representations of knowing: In defense of cognitive apprenticeship.</title>
  <summary type="xhtml">Sandberg and Wielinga argue in their paper, "Situated Cognition: A paradigm shift?" that "there are no strong reasons to leave the traditional paradigm of cognitive science and AI." They are certainly correct that we should not "disregard evidence and achievements of Cognitive and Instructional Sciences." But they fail to appreciate the implications of the storehouse view of knowledge, which suggests that learning is like putting tools in a shed. Situated cognition arguments against traditional views of learning transfer suggest that human memory does not consist of stored facts and procedures. Perhaps because of the difficulty of this connectionand imagining what the alternative could beSandberg and Wielinga also misconstrue cognitive apprenticeship ("formal education should not just be replaced by 'cognitive apprenticeship'"). They misunderstand the idea of effectively relating formalized subject material to everyday practice, believing Collins et al. to be against the teaching of theories and generalities altogether, when in fact they favor AI applications to education. The practical implication of cognitive apprenticeship is to refocus instructional research on the design process itself: We should design computer systems in partnership with students, teachers, and practitioners in the context of use, so we can produce programs that people can afford and want to use, that promote creativity, and that relate in an honest, pragmatic way to everyday life.</summary>
  <author>
    <name>William J. Clancey</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/615/Atom/cogprints-eprint-615.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/615"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/615/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/615/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/615"/>
  <published>1998-03-20Z</published>
  <updated>2011-03-11T08:54:07Z</updated>
  <id>http://cogprints.org/id/eprint/615</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/615"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/615</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/615">
    <sword:depositedOn>1998-03-20Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The Symbol Grounding Problem</title>
  <summary type="xhtml">There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem:  How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., An X is a Y that is Z). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. 
Such a hybrid model would not have an autonomous symbolic module, however; the symbolic functions would emerge as an intrinsically dedicated symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/3106/Atom/cogprints-eprint-3106.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/3106"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/3106/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/3106/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/3106"/>
  <published>2003-08-12Z</published>
  <updated>2011-03-11T08:55:19Z</updated>
  <id>http://cogprints.org/id/eprint/3106</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/3106"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/3106</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/3106">
    <sword:depositedOn>2003-08-12Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The Symbol Grounding Problem</title>
  <summary type="xhtml">There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem:  How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., An X is a Y that is Z). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. 
Such a hybrid model would not have an autonomous symbolic module, however; the symbolic functions would emerge as an intrinsically dedicated symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.</summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/296/Atom/cogprints-eprint-296.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/296"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/296/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/296/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/296"/>
  <published>1998-06-01Z</published>
  <updated>2011-03-11T08:53:48Z</updated>
  <id>http://cogprints.org/id/eprint/296</id>
  <category term="confpaper" label="Conference Paper" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/296"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/296</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/296">
    <sword:depositedOn>1998-06-01Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">The frame of reference problem in cognitive modeling</title>
  <summary type="xhtml">Since at least the mid-70's there has been widespread agreement among cognitive science researchers that models of a problem-solving agent should incorporate its knowledge about the world and an inference procedure for interpreting this knowledge to construct plans and take actions. Research questions have focused on how knowledge is represented in computer programs and how such cognitive models can be verified in psychological experiments. We are now experiencing increasing confusion and misunderstanding as different critiques are leveled against this methodology and new jargon is introduced (e.g., "not rules," "ready-to-hand," "background," "situated," "subsymbolic"). Such divergent approaches put a premium on improving our understanding of past modeling methods, allowing us to more sharply contrast proposed alternatives. This paper compares and synthesizes new robotic research that is founded on the idea that knowledge does not consist of objective representations of the world. This research develops a new view of planning that distinguishes between a robot designer's ontological preconceptions, the dynamics of a robot's interaction with an environment, and an observer's descriptive theories of patterns in the robot's behavior. These frame-of-reference problems are illustrated here and unified by a new framework for describing cognitive models.</summary>
  <author>
    <name>William J. Clancey</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/1573/Atom/cogprints-eprint-1573.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/1573"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/1573/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/1573/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/1573"/>
  <published>2001-06-18Z</published>
  <updated>2011-03-11T08:54:40Z</updated>
  <id>http://cogprints.org/id/eprint/1573</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/1573"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/1573</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/1573">
    <sword:depositedOn>2001-06-18Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Minds, Machines and Searle</title>
  <summary type="xhtml">Searle's celebrated Chinese Room Argument has shaken the
foundations of Artificial Intelligence. Many refutations have been attempted, but
none seem convincing. This paper is an attempt to sort out explicitly the
assumptions and the logical, methodological and empirical points of disagreement.
Searle is shown to have underestimated some features of computer modeling, but
the heart of the issue turns out to be an empirical question about the scope and
limits of the purely symbolic (computational) model of the mind. Nonsymbolic
modeling turns out to be immune to the Chinese Room Argument. The issues
discussed include the Total Turing Test, modularity, neural modeling, robotics,
causality and the symbol-grounding problem. </summary>
  <author>
    <name>Stevan Harnad</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/705/Atom/cogprints-eprint-705.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/705"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/705/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/705/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/705"/>
  <published>1998-06-22Z</published>
  <updated>2011-03-11T08:54:12Z</updated>
  <id>http://cogprints.org/id/eprint/705</id>
  <category term="techreport" label="Departmental Technical Report" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/705"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/705</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/705">
    <sword:depositedOn>1998-06-22Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Why robots will have emotions</title>
  <summary type="xhtml">Emotions involve complex processes produced by interactions between motives, beliefs, percepts, etc. E.g. real or imagined fulfilment or violation of a motive, or triggering of a 'motive-generator', can disturb processes produced by other motives. To understand emotions, therefore, we need to understand motives and the types of processes they can produce. This leads to a study of the global architecture of a mind. Some constraints on the evolution of minds are discussed. Types of motives and the processes they generate are sketched.</summary>
  <author>
    <name>Aaron Sloman</name>
    <email/>
  </author>
  <author>
    <name>Monica Croucher</name>
    <email/>
  </author>
</entry>
<entry>
  <link rel="self" href="http://cogprints.org/cgi/export/eprint/499/Atom/cogprints-eprint-499.xml"/>
  <link rel="edit" href="http://cogprints.org/id/eprint/499"/>
  <link rel="edit-media" href="http://cogprints.org/id/eprint/499/contents"/>
  <link rel="contents" href="http://cogprints.org/id/eprint/499/contents"/>
  <link rel="alternate" href="http://cogprints.org/id/eprint/499"/>
  <published>1998-07-24Z</published>
  <updated>2011-03-11T08:54:00Z</updated>
  <id>http://cogprints.org/id/eprint/499</id>
  <category term="journalp" label="Journal (Paginated)" scheme="http://cogprints.org/data/eprint/type"/>
  <category term="archive" label="Live Archive" scheme="http://eprints.org/ep2/data/2.0/eprint/eprint_status"/>
  <link rel="http://purl.org/net/sword/terms/statement" href="http://cogprints.org/id/eprint/499"/>
  <sword:state href="http://eprints.org/ep2/data/2.0/eprint/eprint_status/archive"/>
  <sword:stateDescription>This item is in the repository with the URL: http://cogprints.org/id/eprint/499</sword:stateDescription>
  <sword:originalDeposit href="http://cogprints.org/id/eprint/499">
    <sword:depositedOn>1998-07-24Z</sword:depositedOn>
  </sword:originalDeposit>
  <title type="xhtml">Computing Machinery and Intelligence</title>
  <summary type="xhtml">I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"</summary>
  <author>
    <name>A. M. Turing</name>
    <email/>
  </author>
</entry>
</feed>
