TY - GEN
T1 - M3B Corpus
T2 - 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and 2019 ACM International Symposium on Wearable Computers, UbiComp/ISWC 2019
AU - Soneda, Yusuke
AU - Matsuda, Yuki
AU - Arakawa, Yutaka
AU - Yasumoto, Keiichi
PY - 2019/9/9
Y1 - 2019/9/9
N2 - This paper presents the first attempt to create a corpus of human-to-human multi-modal communication among multiple persons in group discussions. Our corpus includes not only video of the conversations but also head movement and eye gaze. In addition, it includes detailed labels for the behaviors that appeared in the discussions. Since we focus on micro-behaviors, we classified general behaviors into more detailed behaviors based on their meaning. For example, we distinguish four types of smile: response, agree, interesting, and sympathy. Because creating such a corpus, with multiple sensor data streams and detailed labels, takes considerable effort, to our knowledge no such corpus has been created before. In this work, we created a corpus called “M3B Corpus (Multi-Modal Meeting Behavior Corpus),” which includes 320 minutes of discussion among 21 Japanese students in total, by developing a recording system that can handle multiple sensors and a 360-degree camera simultaneously and synchronously. In this paper, we introduce our recording system and report the details of the M3B Corpus.
AB - This paper presents the first attempt to create a corpus of human-to-human multi-modal communication among multiple persons in group discussions. Our corpus includes not only video of the conversations but also head movement and eye gaze. In addition, it includes detailed labels for the behaviors that appeared in the discussions. Since we focus on micro-behaviors, we classified general behaviors into more detailed behaviors based on their meaning. For example, we distinguish four types of smile: response, agree, interesting, and sympathy. Because creating such a corpus, with multiple sensor data streams and detailed labels, takes considerable effort, to our knowledge no such corpus has been created before. In this work, we created a corpus called “M3B Corpus (Multi-Modal Meeting Behavior Corpus),” which includes 320 minutes of discussion among 21 Japanese students in total, by developing a recording system that can handle multiple sensors and a 360-degree camera simultaneously and synchronously. In this paper, we introduce our recording system and report the details of the M3B Corpus.
UR - http://www.scopus.com/inward/record.url?scp=85072897010&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85072897010&partnerID=8YFLogxK
U2 - 10.1145/3341162.3345588
DO - 10.1145/3341162.3345588
M3 - Conference contribution
T3 - UbiComp/ISWC 2019 - Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers
SP - 825
EP - 834
BT - UbiComp/ISWC 2019 - Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers
PB - Association for Computing Machinery, Inc
Y2 - 9 September 2019 through 13 September 2019
ER -