<descriptionSet data-view:transformation="http://purl.org/eprint/epdcx/xslt/2006-11-16/epdcx2rdfxml.xsl" xsi:schemaLocation="http://purl.org/eprint/epdcx/2006-11-16/ http://purl.org/eprint/epdcx/xsd/2006-11-16/epdcx.xsd" xmlns="http://purl.org/eprint/epdcx/2006-11-16/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:data-view="http://www.w3.org/2003/g/data-view#">
  <description resourceURI="http://cogprints.org/1578/">
    <statement propertyURI="http://purl.org/dc/elements/1.1/type" valueURI="http://purl.org/eprint/entityType/ScholarlyWork"/>
    <statement propertyURI="http://purl.org/dc/elements/1.1/identifier">
      <valueString sesURI="http://purl.org/dc/terms/URI">http://cogprints.org/1578/</valueString>
    </statement>
    <statement propertyURI="http://purl.org/dc/elements/1.1/title">
      <valueString>Other bodies, Other minds: A machine incarnation of an old philosophical problem</valueString>
    </statement>
    <statement propertyURI="http://purl.org/dc/terms/abstract">
      <valueString>Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind, when the only way to know is by being the other body? In practice we all use some form of Turing Test: if it can do everything a body with a mind can do, such that we can't tell them apart, we have no basis for doubting that it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) tested only linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol-manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body.</valueString>
    </statement>
    <statement valueRef="id1" propertyURI="http://purl.org/dc/elements/1.1/creator">
      <valueString>Harnad, Stevan</valueString>
    </statement>
    <statement propertyURI="http://purl.org/dc/elements/1.1/subject" vesURI="http://purl.org/dc/terms/LCSH">
      <valueString>Cognitive Psychology</valueString>
    </statement>
    <statement propertyURI="http://purl.org/dc/elements/1.1/subject" vesURI="http://purl.org/dc/terms/LCSH">
      <valueString>Artificial Intelligence</valueString>
    </statement>
    <statement propertyURI="http://purl.org/dc/elements/1.1/subject" vesURI="http://purl.org/dc/terms/LCSH">
      <valueString>Philosophy of Mind</valueString>
    </statement>
  </description>
  <description resourceId="id1">
    <statement propertyURI="http://purl.org/dc/elements/1.1/type" valueURI="http://purl.org/eprint/entityType/Person"/>
    <statement propertyURI="http://xmlns.com/foaf/0.1/givenname">
      <valueString>Stevan</valueString>
    </statement>
    <statement propertyURI="http://xmlns.com/foaf/0.1/familyname">
      <valueString>Harnad</valueString>
    </statement>
  </description>
</descriptionSet>