Lifelogging caption generation via fourth-person vision in a human–robot symbiotic environment

Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume

Research output: Contribution to journal › Article › peer-reviewed


Automatic analysis of our daily lives and activities through a first-person lifelog camera provides us with opportunities to improve our life rhythms or to support our limited visual memories. Notably, to express these visual experiences, the task of generating captions from first-person lifelog images has been actively studied in recent years. First-person images capture scenes approximating what users actually see; however, the visual cues they contain are insufficient to express the user's context, since the images are constrained by the user's own intentions. Our challenge is to generate lifelog captions using a meta-perspective called "fourth-person vision". "Fourth-person vision" is a novel concept that exploits the visual information from the first-, second-, and third-person perspectives in a complementary manner. First, we assume human–robot symbiotic scenarios that provide a second-person perspective from a camera mounted on the robot and a third-person perspective from a camera fixed in the symbiotic room. To validate our approach in this scenario, we collect perspective-aware lifelog videos and corresponding caption annotations. Subsequently, we propose a multi-perspective image captioning model composed of an image-wise salient region encoder, an attention module that adaptively fuses the salient regions, and a caption decoder that generates scene descriptions. We demonstrate that our proposed model based on the fourth-person concept substantially improves captioning performance over single- and double-perspective models.
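The abstract describes an attention module that adaptively fuses salient regions from the first-, second-, and third-person views. The following is a minimal sketch of one plausible form of that fusion step, assuming simple dot-product attention over per-perspective feature vectors; the function names, feature dimensions, and scoring scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_perspectives(features, query):
    """Attention-weighted fusion of per-perspective features (illustrative).

    features: dict mapping perspective name -> (d,) feature vector,
              e.g. pooled salient-region encodings per camera view.
    query:    (d,) vector, e.g. the caption decoder's hidden state.
    Returns the fused (d,) context vector and the attention weights.
    """
    names = list(features)
    # Dot-product relevance score of each perspective against the query.
    scores = np.array([features[n] @ query for n in names])
    weights = softmax(scores)
    # Convex combination of the perspective features.
    fused = sum(w * features[n] for w, n in zip(weights, names))
    return fused, dict(zip(names, weights))
```

In this sketch the decoder can lean on whichever view is most informative at each step, e.g. the robot's second-person view when the user's hands are out of the first-person frame.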

Journal: ROBOMECH Journal
Publication status: Published - Dec 1 2020

All Science Journal Classification (ASJC) codes

  • Modeling and Simulation
  • Instrumentation
  • Mechanical Engineering
  • Control and Optimization
  • Artificial Intelligence

