Research Fields

Our research fields have gradually spread out from the original purpose of "anthropomorphic robots as communicative media." The goal of our research is to create a total design of virtual environments and artificial presences as communication subjects.

Component of Virtual Communication Media Design

(Keyword cloud of our research topics: embodied agents, gaze, crossmodal expression, haptic and tactile interfaces, sound AR, stuffed-toy robots, elderly-care support, and related themes.)

(1) Anthropomorphic robot/agent systems and applications

Representation, Expression, and Communication on Anthropomorphic Media
We aim natural and expressive communication on robotics and agents as same as human-human communication. The most important focus of the research is to create reality of the artificial presence as a presence for the communication partner with human. There are many method to express your internal mind. Especially we use non-verbal expressions; gestures, eye-motions, or vocal intonations including vocal timbre, tone, strength. In this research we especially focus on stuffed-toy robots to be natural and familiar partners.
Keywords: robot, stuffed-toy interface, living-being-like expressions, elderly people support, life support, speech and music information processing, communication, multimodality

  1. Gaze-communicative stuffed-toy robot system based on remote gaze-tracking
  2. Crossmodality on expressive coordination of voice and gestures
  3. Hand-puppet interface for expressive singing voice control (Application System)
  4. Context-aware reactions of a sensor doll
  5. Nuance generation of speech/singing voice using morphing (STRAIGHT by Dr. Hideki Kawahara)
  6. Personal speech synthesis: minimum database of personal speech recording (at NTT)
  7. Virtual agents to be lively and real
  8. Virtual agents to support our lives: e-learning, daily lives, human-human communication

(2) Parallel communication media for multiple participants

Attitude, Personal Messaging, Directional/Localized Sound, Communication Media
In this project, we aim to enable parallel personal communication alongside the one-to-many (main) communication, using localized/directional sound for personal messages. Not only to support such intended communication but also to draw out overall tendencies in the participants' attitudes and concentration, which the speaker currently grasps only subconsciously, our project proposes methods for i) understanding both the speaker and the participants, and ii) generating directional/localized sound or other signals for personal communication and for a perspective understanding of a large audience.
Keywords: participating attitude, perspective audience map, localized/directional sound, personal messages, communication-target switching, multimodality, multiple mobile devices

  1. Localized sound synthesis using multiple personal devices with direction and delay estimation among the devices for parallel communication (1)
  2. Musical BGM overlapping to lectures/talks for parallel communication (2)
  3. Analysing Lecturer's Speech, Pose and Intended Actions
  4. Perspective Understanding of the Participants
  5. AR Support for Understanding of Talk/Lecture Contents
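Item 1 above depends on estimating the relative delay between the microphones of multiple personal devices. One standard technique for this (shown here as an illustrative sketch, not necessarily the method used in the project) is generalized cross-correlation with phase transform (GCC-PHAT), which whitens the cross-power spectrum so that only the phase, and hence the delay, remains:

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the delay of sig_b relative to sig_a, in seconds,
    via GCC-PHAT. Positive result means sig_b lags sig_a."""
    n = len(sig_a) + len(sig_b)
    # Cross-power spectrum, whitened so only phase (delay) information remains.
    spec = np.fft.rfft(sig_a, n) * np.conj(np.fft.rfft(sig_b, n))
    spec /= np.abs(spec) + 1e-12
    cc = np.fft.irfft(spec, n)
    # Re-center so negative lags come before positive lags.
    max_lag = len(sig_b) - 1
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
    lag = np.argmax(np.abs(cc)) - max_lag
    return -lag / fs
```

With the pairwise delays (and rough direction estimates) in hand, each device can schedule its playback so that the personal message sums coherently only near the intended listener.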

(3) Communication model in multimodal analyses

Detailed analyses of multimodal expressions in communication are expected to become an engine for understanding human-human communication. An internal model of the artificial presence is also expected to be derived from such analyses.

(4) Multimedia virtual environment

Recent Hot Topics in Our Research

  • Physical Contact of Robot to User
    (supported in part by JSPS KAKENHI 15H01698)
  • One-to-many Communication Support through Sound AR
    (almost fully supported by JSPS KAKENHI 25700021)
  • Involuntary Expressions from Robot
  • Personal Life-supporting Agent and Collaboration Support Agent
    (supported in part by JSPS KAKENHI 15H01698)
  • Personal E-Learning and Life Support
    (supported by JSPS KAKENHI)
  • Detailed Models in Artificial Consciousness
  • Multimodal Communication Design for Elderly People

Currently Related Institutes/Researchers

  • Nagoya University: Dr. Kenji Mase Lab.
    • Dr. Takatsugu Hirayama
    • Dr. Yu Enokibori
  • Kyoto Institute of Technology: Dr. Noriaki Kuwahara Lab.
  • Personal Collaborations
    • Dr. Hirotake Yamazoe (Ritsumeikan University)
    • Dr. Hirotaka Osawa (Tsukuba University)
    • Dr. Irini Giannopulu (Bond University)

Past Themes