Research Interests / Fields

Originally, I studied interaction between humans and anthropomorphic systems such as stuffed-toy robots and virtual agents; broadening the interpretation of "virtual" and "communication media," I now design human information environments. Ultimately, I aim at an integrated design of virtual environments and artificial presences, together with a principled approach to human sensation, perception, and cognition.

Structure of Virtual Communication Media Design

[Word cloud generated from Japanese-language publications]
[Word cloud generated from English-language publications]

(1) Virtual agent and robot systems and their applications

Communication using intentional and unconscious expressions of anthropomorphic media
We aim at natural and expressive communication with robots and agents, comparable to human-human communication. The central focus of this research is to create the reality of an artificial presence, so that humans accept it as a genuine communication partner. There are many ways to express one's internal mind; we focus on non-verbal expressions such as gestures, eye motions, and vocal intonation, including vocal timbre, tone, and strength. In particular, this research uses stuffed-toy robots as natural and familiar partners.
Keywords: robot, stuffed-toy interface, living-being-like expressions, elderly people support, life support, speech and music information processing, communication, multimodality

  1. Gaze-communicative stuffed-toy robot system based on remote gaze-tracking
  2. Crossmodality on expressive coordination of voice and gestures
  3. Hand-puppet interface for expressive singing voice control (Application System)
  4. Context-aware reactions of a sensor doll
  5. Nuance generation of speech/singing voice using morphing (STRAIGHT by Dr. Hideki Kawahara)
  6. Personal speech synthesis: minimum database of personal speech recording (in NTT)
  7. Virtual agents to be lively and real
  8. Virtual agents to support our lives: e-learning, daily lives, human-human communication
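
As a rough illustration of theme 1 above, the sketch below shows how remote gaze tracking could drive a stuffed-toy robot's gaze reactions: return eye contact when the user looks at the robot, otherwise follow the user's gaze target to establish joint attention. The `GazeTracker` and `RobotHead` interfaces, the coordinate mapping, and the thresholds are hypothetical placeholders, not the published system.

```python
import math
import time

class GazeTracker:
    """Stub for a remote gaze tracker (hypothetical interface)."""
    def read(self):
        # Return (x, y) of the estimated gaze point in robot-centric
        # meters, or None while no face is detected.
        return (0.4, 0.1)

class RobotHead:
    """Stub for a stuffed-toy robot head with pan/tilt servos."""
    def look_at(self, pan_deg, tilt_deg):
        print(f"pan={pan_deg:+.1f} deg, tilt={tilt_deg:+.1f} deg")

MUTUAL_GAZE_RADIUS = 0.05  # gaze within 5 cm of the robot's face = eye contact

def joint_attention_loop(tracker, head, period=0.1, steps=50):
    """React to the user's gaze: return eye contact on mutual gaze,
    otherwise turn toward the user's gaze target (joint attention)."""
    for _ in range(steps):
        gaze = tracker.read()
        if gaze is not None:
            x, y = gaze
            if math.hypot(x, y) < MUTUAL_GAZE_RADIUS:
                head.look_at(0.0, 0.0)            # look back at the user
            else:
                head.look_at(40.0 * x, 20.0 * y)  # toy linear angle mapping
        time.sleep(period)

joint_attention_loop(GazeTracker(), RobotHead(), steps=3)
```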

(2) Parallel communication media for multiple participants

Attitude, Personal Messaging, Directional/Localized Sound, Communication Media
In this project, we aim to enable parallel communication alongside the one-to-many (main) communication by delivering personal messages as localized/directional sound. Beyond supporting such intended communication, we also aim to elicit the overall attitude and concentration of the participants, which the speaker currently grasps only subconsciously. The project therefore proposes methods for i) understanding both the speaker and the participants, and ii) generating directional/localized sound or other signals for personal communication and for a perspective understanding of a large audience.
Keywords: participating attitude, perspective audience map, localized/directional sound, personal messages, communication-target switching, multimodality, multiple mobile devices

  1. Localized sound synthesis using multiple personal devices with direction and delay estimation among the devices for parallel communication (1)
  2. Musical BGM overlapping to lectures/talks for parallel communication (2)
  3. Analysing Lecturer's Speech, Pose and Intended Actions
  4. Perspective Understanding of the Participants
  5. AR Support for Understanding of Talk/Lecture Contents
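
As a rough sketch of item 1 above, the toy example below approximates a localized personal message with multiple personal devices: each device delays and attenuates a shared signal according to its estimated distance to the target listener, so that the copies arrive roughly in phase at that position. The device coordinates, the 1/r gain model, and all function names are illustrative assumptions, not the published method.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature
SAMPLE_RATE = 16000     # Hz

def per_device_signals(signal, device_positions, target, sample_rate=SAMPLE_RATE):
    """Delay-and-attenuate a mono signal for each device so that the
    copies arrive roughly in phase at the target listener's position.

    signal           : 1-D numpy array, the personal message audio
    device_positions : list of (x, y) device coordinates in meters
    target           : (x, y) position of the intended listener
    """
    target = np.asarray(target, dtype=float)
    distances = [np.linalg.norm(np.asarray(p, dtype=float) - target)
                 for p in device_positions]
    max_d = max(distances)
    outputs = []
    for d in distances:
        # Devices farther from the target get a smaller delay (play earlier),
        # so all copies arrive at the target at the same instant.
        delay_samples = int(round((max_d - d) / SPEED_OF_SOUND * sample_rate))
        gain = 1.0 / max(d, 0.1)  # simple 1/r attenuation, clipped near zero
        delayed = np.concatenate([np.zeros(delay_samples), signal]) * gain
        outputs.append(delayed)
    return outputs

# Toy usage: three audience smartphones, message aimed at a listener at (1, 2).
tone = np.sin(2 * np.pi * 440 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
devices = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
signals = per_device_signals(tone, devices, target=(1.0, 2.0))
print([len(s) for s in signals])
```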

(3) Communication model in multimodal analyses

Detailed analyses of multimodal expressions in communication are expected to become an engine for understanding human-human communication. An internal model of the artificial presence is also expected to be derived from such analyses.
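
As a minimal, purely illustrative example of such an analysis, the sketch below correlates a speech-energy envelope with head-motion magnitude at a common frame rate; the framing scheme, the correlation measure, and the synthetic inputs are assumptions, not data from our studies.

```python
import numpy as np

def frame_energy(signal, frame_len):
    """Short-time energy envelope of a 1-D signal."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    return (frames ** 2).mean(axis=1)

def multimodal_correlation(speech, head_motion, frame_len=400):
    """Pearson correlation between speech energy and head-motion
    magnitude, both reduced to the same frame rate."""
    e = frame_energy(speech, frame_len)
    m = frame_energy(head_motion, frame_len)
    n = min(len(e), len(m))
    return np.corrcoef(e[:n], m[:n])[0, 1]

# Toy usage with synthetic data: 10 s of audio at 16 kHz and head-motion
# speed sampled at the same rate (in practice the modalities are resampled).
rng = np.random.default_rng(0)
speech = rng.normal(size=160000)
head_motion = 0.5 * np.abs(speech) + rng.normal(scale=0.1, size=160000)
print(f"speech/head-motion correlation: "
      f"{multimodal_correlation(speech, head_motion):.2f}")
```

A consistently observed coupling of this kind would be one small building block of the internal model mentioned above.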

(4) Multimedia virtual environment

Recent Hot Topics in Our Research

  • Physical Contact of Robot to User (supported in part by JSPS KAKENHI 15H01698)
  • One-to-many Communication Support through Sound AR (almost fully supported by JSPS KAKENHI 25700021)
  • Involuntary Expressions from Robot
  • Personal Life-supporting Agent and Collaboration Support Agent (supported in part by JSPS KAKENHI 15H01698)
  • Personal E-Learning and Life Support (supported by JSPS KAKENHI)
  • Detailed Models in Artificial Consciousness
  • Multimodal Communication Design for Elderly People

Currently Related Institutes/Researchers

  • Nagoya University: Dr. Kenji Mase Lab.
    • Dr. Takatsugu Hirayama
    • Dr. Yu Enokibori
  • Kyoto Institute of Technology: Dr. Noriaki Kuwahara Lab.
  • Personal Collaborations
    • Dr. Hirotake Yamazoe (Ritsumeikan University)
    • Dr. Hirotaka Osawa (University of Tsukuba)
    • Dr. Irini Giannopulu (Bond University)

Past Themes