Estimation of Mental Health Quality of Life using Visual Information during Interaction with a Communication Agent

Satoshi Nakagawa, Shogo Yonekura, Hoshinori Kanazawa, Satoshi Nishikawa, Yasuo Kuniyoshi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

It is essential for a monitoring system or a communication robot that interacts with an elderly person to accurately understand the user's state and generate actions based on their condition. To ensure the welfare of the elderly, quality of life (QOL) is a useful indicator because it comprehensively captures physical suffering as well as mental and social activity. In this study, we hypothesize that visual information is useful for extracting high-dimensional QOL-related information from data collected by an agent while it interacts with a person. We propose a QOL estimation method that integrates facial expressions, head fluctuations, and eye movements, all of which can be extracted as visual information during interaction with a communication agent. Our goal is to implement a multiple-feature-vector learning estimator that incorporates convolutional 3D networks to learn spatiotemporal features. However, no database suitable for QOL estimation exists; therefore, we implement a free-communication agent and construct our own database from information collected through interpersonal experiments using the agent. To verify the proposed method, we focus on estimating the mental health QOL scale, which a previous study found to be the most difficult to estimate among the eight scales that compose QOL. We compare four estimation settings: single-modal learning using each of the three features (facial expressions, head fluctuations, and eye movements) and multiple-feature-vector learning integrating all three. The experimental results show that multiple-feature-vector learning yields smaller estimation errors than any of the single-modal settings.
Experiments evaluating the difference between the QOL score estimated by the proposed method and the actual QOL score calculated by the conventional method also show that the average error is less than 10 points, indicating that the proposed system can estimate the QOL score. This new approach to estimating human conditions can therefore improve the quality of human-robot interaction and personalized monitoring.
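The abstract describes a late-fusion scheme: each visual modality (facial expressions, head fluctuations, eye movements) is reduced to a feature vector, the vectors are combined, and a score on the 0-100 QOL scale is predicted. The sketch below illustrates only that fusion idea with simple temporal pooling and a linear head; all function names, the pooling step, and the linear head are illustrative assumptions, not the authors' actual convolutional-3D implementation.

```python
# Minimal sketch of late fusion over three visual modalities for QOL
# scoring. The pooling "extractor" and linear head are stand-ins for the
# paper's convolutional-3D feature learning (hypothetical names throughout).
from typing import List


def extract_features(frames: List[float], dim: int) -> List[float]:
    """Stand-in for a spatiotemporal feature extractor: pool a per-frame
    signal into a fixed-size feature vector by chunked averaging."""
    chunk = max(1, len(frames) // dim)
    return [sum(frames[i * chunk:(i + 1) * chunk]) / chunk
            for i in range(dim)]


def fuse_and_score(face: List[float], head: List[float], eye: List[float],
                   weights: List[float], bias: float) -> float:
    """Concatenate the per-modality feature vectors and apply a linear
    regression head, clamping the result to the 0-100 QOL scale."""
    fused = face + head + eye  # late fusion by concatenation
    score = bias + sum(w * x for w, x in zip(weights, fused))
    return max(0.0, min(100.0, score))


if __name__ == "__main__":
    # Dummy 30-frame signals for each modality, pooled to 4-dim vectors.
    face = extract_features([0.5] * 30, dim=4)
    head = extract_features([0.5] * 30, dim=4)
    eye = extract_features([0.5] * 30, dim=4)
    qol = fuse_and_score(face, head, eye, weights=[10.0] * 12, bias=20.0)
    print(qol)  # a score on the 0-100 QOL scale
```

In the paper, single-modal baselines correspond to scoring each vector alone, while the multiple-feature-vector learner corresponds to the concatenated input above.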

Original language: English
Title of host publication: 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1321-1327
Number of pages: 7
ISBN (Electronic): 9781728160757
DOIs
Publication status: Published - Aug 2020
Externally published: Yes
Event: 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020 - Virtual, Naples, Italy
Duration: Aug 31 2020 - Sep 4 2020

Publication series

Name: 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020

Conference

Conference: 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020
Country: Italy
City: Virtual, Naples
Period: 8/31/20 - 9/4/20

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Human-Computer Interaction
  • Social Psychology
  • Communication

