A compact descriptor CHOG3D and its application in human action recognition

Yanli Ji, Atsushi Shimada, Hajime Nagahara, Rin-Ichiro Taniguchi

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

In this paper, we propose a new method for computing local features. We extend the FAST corner detector to the spatiotemporal domain to extract the shape and motion information of human actions, and we propose a compact peak-kept histogram of oriented spatiotemporal gradients (CHOG3D) to describe the local features. CHOG3D is computed in a small support region around a feature point and employs only the simplest, first-order gradient. Through parameter training, the proper length of CHOG3D is determined to be 80 elements, 1/12.5 of the dimensionality of HOG3D on the KTH dataset. In addition, CHOG3D keeps the peak value of the quantized gradient, which represents human actions more precisely and distinguishes them more reliably. It thereby overcomes the main disadvantages of HOG3D, namely its complex computation and very long descriptor: in a comparison of computation cost, CHOG3D is 7.56 times faster than HOG3D on the KTH dataset. We apply the descriptor to human action recognition with a support vector machine, and the results show that our method outperforms HOG3D and several other currently used algorithms.
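The pipeline the abstract describes (first-order spatiotemporal gradients over a small support region around a detected interest point, quantized into a magnitude-weighted orientation histogram whose peak bin is preserved) can be sketched as below. This is a minimal illustration under assumptions, not the authors' implementation: the support-region size, bin counts, the spatial/temporal angle parameterization, and the clipping threshold used to mimic "peak-kept" normalization are all guesses, and the paper's trained 80-element layout (presumably multiple cells and bins) is not reproduced.

```python
import numpy as np

def chog3d_sketch(video, x, y, t, half=4, n_bins=10):
    """Illustrative sketch of a CHOG3D-style local descriptor.

    video    : (T, H, W) float array of grayscale frames
    (x, y, t): spatiotemporal interest point (must lie at least
               `half` samples inside the volume on every axis)
    half     : half-size of the cubic support region (assumed value)
    n_bins   : orientation bins per histogram (assumed value)
    """
    # Cut a small cubic support region around the feature point.
    cube = video[t - half:t + half, y - half:y + half, x - half:x + half]

    # First-order gradients along time, vertical, and horizontal axes:
    # the "simplest gradient" the abstract refers to.
    gt, gy, gx = np.gradient(cube)

    # Gradient magnitude, spatial orientation, and temporal inclination.
    mag = np.sqrt(gx**2 + gy**2 + gt**2)
    theta = np.arctan2(gy, gx)                    # in-plane orientation
    phi = np.arctan2(gt, np.sqrt(gx**2 + gy**2))  # spatiotemporal angle

    # Magnitude-weighted orientation histograms, concatenated.
    h_theta, _ = np.histogram(theta, bins=n_bins,
                              range=(-np.pi, np.pi), weights=mag)
    h_phi, _ = np.histogram(phi, bins=n_bins,
                            range=(-np.pi / 2, np.pi / 2), weights=mag)
    hist = np.concatenate([h_theta, h_phi])

    # Normalize, clipping all bins EXCEPT the peak, so the dominant
    # orientation stays emphasized (a guess at "peak-kept" quantization;
    # the 0.25 threshold is an assumption).
    peak = hist.argmax()
    norm = np.linalg.norm(hist) + 1e-9
    clipped = np.minimum(hist / norm, 0.25)
    clipped[peak] = hist[peak] / norm  # restore the peak value
    return clipped / (np.linalg.norm(clipped) + 1e-9)

# Toy usage: a random video and a hypothetical interest point.
video = np.random.rand(30, 120, 160)
desc = chog3d_sketch(video, x=80, y=60, t=15)
print(desc.shape)  # (20,) in this toy; the paper's trained length is 80
```

In the full system, such descriptors would be computed at every detected spatiotemporal corner, quantized (e.g., into a bag-of-features codebook), and the resulting video-level histograms fed to the support vector machine mentioned in the abstract.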

Original language: English
Pages (from-to): 69-77
Number of pages: 9
Journal: IEEJ Transactions on Electrical and Electronic Engineering
ISSN: 1931-4973
Volume: 8
Issue number: 1
DOI: 10.1002/tee.21793
Publication status: Published - 1 Jan 2013

Fingerprint

  • Support vector machines
  • Detectors
  • Costs

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Cite this

A compact descriptor CHOG3D and its application in human action recognition. / Ji, Yanli; Shimada, Atsushi; Nagahara, Hajime; Taniguchi, Rin-Ichiro.

In: IEEJ Transactions on Electrical and Electronic Engineering, Vol. 8, No. 1, 01.01.2013, p. 69-77.

