First-person Video Analysis for Evaluating Skill Level in the Humanitude Tender-Care Technique

Atsushi Nakazawa, Yu Mitsuzumi, Yuki Watanabe, Ryo Kurazume, Sakiko Yoshikawa, Miwako Honda

Research output: Contribution to journal › Article

Abstract

In this paper, we describe a wearable first-person video (FPV) analysis system for evaluating the skill levels of caregivers. This work is part of our project that aims to quantify and analyze the tender-care technique known as Humanitude by using wearable sensing devices and AI technology. With our system, caregivers can evaluate and improve their care skills on their own. From the FPVs of care sessions recorded by wearable cameras worn by caregivers, we obtained the 3D facial distance, pose, and eye-contact states between caregivers and care receivers by using facial landmark detection and deep neural network (DNN)-based eye contact detection. We applied statistical analysis to these features and developed algorithms that score tender-care skill. In our experiments, we first evaluated the performance of our DNN-based eye contact detection by using eye contact datasets prepared from YouTube videos and FPVs of simulated conversational scenes. We then performed skill evaluations using Humanitude training scenes involving three novice caregivers, two Humanitude experts, and seven intermediate-level students. The results showed that our eye contact detection outperformed existing methods and that our skill evaluations can estimate care skill levels.
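The abstract describes a pipeline that aggregates per-frame features (3D face distance and DNN-detected eye-contact states) into a care-skill score via statistical analysis. The following is a minimal illustrative sketch of that aggregation step only; the `FrameFeatures` fields, the 60/40 weighting, and the `max_distance_m` threshold are hypothetical placeholders, not the paper's fitted parameters or actual algorithm.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class FrameFeatures:
    face_distance_m: float  # estimated 3D distance between caregiver and receiver faces
    eye_contact: bool       # per-frame output of an eye-contact detector

def skill_score(frames, max_distance_m=0.8):
    """Aggregate per-frame features into a 0-100 skill score.

    Weights and the distance threshold are illustrative assumptions,
    not values from the paper.
    """
    if not frames:
        return 0.0
    # Fraction of frames with detected eye contact.
    eye_contact_ratio = mean(1.0 if f.eye_contact else 0.0 for f in frames)
    # Closeness: 1.0 at zero distance, 0.0 at or beyond max_distance_m.
    closeness = mean(max(0.0, 1.0 - f.face_distance_m / max_distance_m)
                     for f in frames)
    return 100.0 * (0.6 * eye_contact_ratio + 0.4 * closeness)

# Example session: frequent eye contact at close range.
session = [FrameFeatures(0.4, True), FrameFeatures(0.5, True),
           FrameFeatures(0.6, False)]
print(round(skill_score(session), 1))  # → 55.0
```

In a full system, `FrameFeatures` would be populated per frame from facial landmark detection and an eye-contact classifier running on the FPV stream.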

Original language: English
Journal: Journal of Intelligent and Robotic Systems: Theory and Applications
DOI: 10.1007/s10846-019-01052-8
Publication status: Published - Jan 1 2019


All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Mechanical Engineering
  • Industrial and Manufacturing Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

Cite this

First-person Video Analysis for Evaluating Skill Level in the Humanitude Tender-Care Technique. / Nakazawa, Atsushi; Mitsuzumi, Yu; Watanabe, Yuki; Kurazume, Ryo; Yoshikawa, Sakiko; Honda, Miwako.

In: Journal of Intelligent and Robotic Systems: Theory and Applications, 01.01.2019.

Research output: Contribution to journal › Article

@article{d63c8d5c64854ee9922c7081636ac803,
title = "First-person Video Analysis for Evaluating Skill Level in the Humanitude Tender-Care Technique",
abstract = "In this paper, we describe a wearable first-person video (FPV) analysis system for evaluating the skill levels of caregivers. This work is part of our project that aims to quantify and analyze the tender-care technique known as Humanitude by using wearable sensing devices and AI technology. With our system, caregivers can evaluate and improve their care skills on their own. From the FPVs of care sessions recorded by wearable cameras worn by caregivers, we obtained the 3D facial distance, pose, and eye-contact states between caregivers and care receivers by using facial landmark detection and deep neural network (DNN)-based eye contact detection. We applied statistical analysis to these features and developed algorithms that score tender-care skill. In our experiments, we first evaluated the performance of our DNN-based eye contact detection by using eye contact datasets prepared from YouTube videos and FPVs of simulated conversational scenes. We then performed skill evaluations using Humanitude training scenes involving three novice caregivers, two Humanitude experts, and seven intermediate-level students. The results showed that our eye contact detection outperformed existing methods and that our skill evaluations can estimate care skill levels.",
author = "Atsushi Nakazawa and Yu Mitsuzumi and Yuki Watanabe and Ryo Kurazume and Sakiko Yoshikawa and Miwako Honda",
year = "2019",
month = jan,
day = "1",
doi = "10.1007/s10846-019-01052-8",
language = "English",
journal = "Journal of Intelligent and Robotic Systems: Theory and Applications",
issn = "0921-0296",
publisher = "Springer Netherlands",

}

TY - JOUR

T1 - First-person Video Analysis for Evaluating Skill Level in the Humanitude Tender-Care Technique

AU - Nakazawa, Atsushi

AU - Mitsuzumi, Yu

AU - Watanabe, Yuki

AU - Kurazume, Ryo

AU - Yoshikawa, Sakiko

AU - Honda, Miwako

PY - 2019/1/1

Y1 - 2019/1/1

N2 - In this paper, we describe a wearable first-person video (FPV) analysis system for evaluating the skill levels of caregivers. This work is part of our project that aims to quantify and analyze the tender-care technique known as Humanitude by using wearable sensing devices and AI technology. With our system, caregivers can evaluate and improve their care skills on their own. From the FPVs of care sessions recorded by wearable cameras worn by caregivers, we obtained the 3D facial distance, pose, and eye-contact states between caregivers and care receivers by using facial landmark detection and deep neural network (DNN)-based eye contact detection. We applied statistical analysis to these features and developed algorithms that score tender-care skill. In our experiments, we first evaluated the performance of our DNN-based eye contact detection by using eye contact datasets prepared from YouTube videos and FPVs of simulated conversational scenes. We then performed skill evaluations using Humanitude training scenes involving three novice caregivers, two Humanitude experts, and seven intermediate-level students. The results showed that our eye contact detection outperformed existing methods and that our skill evaluations can estimate care skill levels.

AB - In this paper, we describe a wearable first-person video (FPV) analysis system for evaluating the skill levels of caregivers. This work is part of our project that aims to quantify and analyze the tender-care technique known as Humanitude by using wearable sensing devices and AI technology. With our system, caregivers can evaluate and improve their care skills on their own. From the FPVs of care sessions recorded by wearable cameras worn by caregivers, we obtained the 3D facial distance, pose, and eye-contact states between caregivers and care receivers by using facial landmark detection and deep neural network (DNN)-based eye contact detection. We applied statistical analysis to these features and developed algorithms that score tender-care skill. In our experiments, we first evaluated the performance of our DNN-based eye contact detection by using eye contact datasets prepared from YouTube videos and FPVs of simulated conversational scenes. We then performed skill evaluations using Humanitude training scenes involving three novice caregivers, two Humanitude experts, and seven intermediate-level students. The results showed that our eye contact detection outperformed existing methods and that our skill evaluations can estimate care skill levels.

UR - http://www.scopus.com/inward/record.url?scp=85068881457&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85068881457&partnerID=8YFLogxK

U2 - 10.1007/s10846-019-01052-8

DO - 10.1007/s10846-019-01052-8

M3 - Article

JO - Journal of Intelligent and Robotic Systems: Theory and Applications

JF - Journal of Intelligent and Robotic Systems: Theory and Applications

SN - 0921-0296

ER -