http://cogprints.org/4990/
Robot Gesture Generation from Environmental Sounds Using Inter-modality Mapping
We propose a motion generation model in which a robot infers the source of an environmental sound and imitates its motion. Sharing environmental sounds between humans and robots lets them share environmental information, but such sounds are difficult to transmit directly in human-robot communication. We approach this problem by focusing on iconic gestures: the robot infers the motion of the sound-source object and maps it onto its own body motion. This method enables the robot to imitate the motion of the sound source with its body.
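A minimal sketch of the pipeline the abstract describes: acoustic features are extracted from an environmental sound, a coarse motion trajectory of the presumed sound source is estimated, and that trajectory is mapped onto a robot joint range. Every function and parameter here is a hypothetical stand-in for illustration, not the authors' implementation.

    import numpy as np

    def extract_features(waveform: np.ndarray, frame_len: int = 512) -> np.ndarray:
        """Frame the waveform and take a log magnitude spectrum per frame.
        (Illustrative feature; the paper's actual features are not specified here.)"""
        n_frames = len(waveform) // frame_len
        frames = waveform[: n_frames * frame_len].reshape(n_frames, frame_len)
        spectrum = np.abs(np.fft.rfft(frames, axis=1))
        return np.log1p(spectrum)

    def estimate_source_motion(features: np.ndarray) -> np.ndarray:
        """Estimate a coarse 1-D trajectory of the presumed sound source.
        Here per-frame energy is used as a crude proxy for source movement."""
        energy = features.mean(axis=1)
        # Normalize to [0, 1] so the trajectory is robot-independent.
        span = energy.max() - energy.min()
        return (energy - energy.min()) / (span + 1e-8)

    def map_to_joint_angles(trajectory: np.ndarray,
                            joint_range=(-0.5, 0.5)) -> np.ndarray:
        """Inter-modality mapping: scale the estimated source motion into a
        joint-angle trajectory the robot can replay as an iconic gesture."""
        lo, hi = joint_range
        return lo + trajectory * (hi - lo)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        sound = rng.standard_normal(16000)   # stand-in environmental sound
        feats = extract_features(sound)
        motion = estimate_source_motion(feats)
        angles = map_to_joint_angles(motion)
        print(angles[:5])

The key design point is the middle stage: the robot never reproduces the sound itself, only the inferred motion of its source, which is what makes the output an iconic gesture.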
Hattori, Yuya
Kozima, Hideki
Komatani, Kazunori
Ogata, Tetsuya
Okuno, Hiroshi G.
Machine Learning
Robotics