Emotour: Multimodal emotion recognition using physiological and audio-visual features

Yuki Matsuda, Dmitrii Fedotov, Yuta Takahashi, Yutaka Arakawa, Keiichi Yasumoto, Wolfgang Minker

Research output: Chapter in Book/Report/Conference proceedingConference contribution


Abstract

To design more context-aware systems for smart environments, especially smart cities, the psychological status of the user, such as emotion, should be considered in addition to environmental information. In this study, we focus on the tourism domain as a typical use case and propose a multimodal method for recognising tourists' emotions during sightseeing. We employ behavioural cues (eye and head/body movements) and audio-visual features to recognise emotion. In real-world experiments with tourists, we achieved an average recall of up to 0.71 on a 3-class emotion recognition task with feature-level fusion.
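The abstract mentions two concrete techniques: feature-level fusion (concatenating per-modality feature vectors before classification) and average recall as the evaluation metric for a 3-class task. A minimal illustrative sketch of both is below; this is an assumption-based sketch of the general techniques, not the authors' implementation, and the function names are hypothetical.

```python
import numpy as np

def feature_level_fusion(*modality_features):
    """Concatenate per-modality feature matrices (samples x dims)
    into a single feature matrix per sample (early fusion)."""
    return np.concatenate(modality_features, axis=1)

def unweighted_average_recall(y_true, y_pred, n_classes=3):
    """Mean of per-class recalls; unlike accuracy, it is not
    dominated by the majority class in imbalanced data."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = []
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():  # skip classes absent from y_true
            recalls.append((y_pred[mask] == c).mean())
    return float(np.mean(recalls))

# Example: fuse two hypothetical modalities (e.g. gaze + audio features)
gaze = np.zeros((4, 2))    # 4 samples, 2 gaze features
audio = np.ones((4, 3))    # 4 samples, 3 audio features
fused = feature_level_fusion(gaze, audio)  # shape (4, 5)

# Example: 3-class recall averaging
uar = unweighted_average_recall([0, 0, 1, 1, 2, 2],
                                [0, 1, 1, 1, 2, 0])
```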

Original language: English
Title of host publication: UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers
Publisher: Association for Computing Machinery, Inc
Pages: 946-951
Number of pages: 6
ISBN (Electronic): 9781450359665
DOI: 10.1145/3267305.3267687
Publication status: Published - Oct 8, 2018
Externally published: Yes
Event: 2018 Joint ACM International Conference on Pervasive and Ubiquitous Computing, UbiComp 2018 and 2018 ACM International Symposium on Wearable Computers, ISWC 2018 - Singapore, Singapore
Duration: Oct 8, 2018 - Oct 12, 2018

Publication series

Name: UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers

Other

Other: 2018 Joint ACM International Conference on Pervasive and Ubiquitous Computing, UbiComp 2018 and 2018 ACM International Symposium on Wearable Computers, ISWC 2018
Country: Singapore
City: Singapore
Period: 10/8/18 - 10/12/18

All Science Journal Classification (ASJC) codes

  • Software
  • Human-Computer Interaction
  • Information Systems


Cite this

    Matsuda, Y., Fedotov, D., Takahashi, Y., Arakawa, Y., Yasumoto, K., & Minker, W. (2018). Emotour: Multimodal emotion recognition using physiological and audio-visual features. In UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers (pp. 946-951). (UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers). Association for Computing Machinery, Inc. https://doi.org/10.1145/3267305.3267687