Real-time foreground segmentation from moving camera based on case-based trajectory classification

Yosuke Nonaka, Atsushi Shimada, Hajime Nagahara, Rin-Ichiro Taniguchi

Research output: Contribution to conference › Paper

2 Citations (Scopus)

Abstract

Several methods for foreground segmentation from a moving camera have recently been proposed. Trajectory-based methods are one typical approach: they segment video frames into foreground and background regions by obtaining long-term trajectories over the entire frame and grouping them according to learned pixel- or motion-based object features. However, maintaining these trajectories often requires a large amount of computation and memory. We present a trajectory-based method that aims at real-time foreground segmentation from a moving camera. Unlike conventional methods, we use trajectories obtained sparsely from only two successive video frames. In addition, our method exploits the spatio-temporal features of trajectories through a case-based approach that improves detection results. We compare our method with previous approaches and show results on challenging video sequences.
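As a rough illustration of the case-based idea in the abstract (not the authors' implementation; the feature layout, function names, and toy case base below are all hypothetical), a trajectory spanning two successive frames can be reduced to a spatio-temporal feature and labelled by its nearest stored case:

```python
import math

# Sketch: classify sparse two-frame trajectories with a case-based
# (nearest-neighbour) rule over spatio-temporal features.
# Feature = (x, y, dx, dy): image position plus frame-to-frame motion.
# Labels: 1 = foreground, 0 = background.

def feature(traj):
    (x0, y0), (x1, y1) = traj          # positions in two successive frames
    return (x0, y0, x1 - x0, y1 - y0)  # spatio-temporal feature vector

def classify(traj, case_base):
    """Label a trajectory by the label of its nearest stored case."""
    f = feature(traj)
    nearest_feature, label = min(case_base, key=lambda c: math.dist(f, c[0]))
    return label

# Toy case base: camera-induced motion drifts right (+2, 0);
# one foreground object moves up-left (-3, -3).
cases = [
    ((10, 10, 2, 0), 0),    # background case
    ((40, 40, 2, 0), 0),    # background case
    ((25, 25, -3, -3), 1),  # foreground case
]

print(classify(((26, 24), (23, 21)), cases))  # → 1 (foreground)
print(classify(((11, 10), (13, 10)), cases))  # → 0 (background)
```

A full implementation would obtain the trajectories from sparse feature tracking between the two frames and maintain the case base online; classifying only these sparse two-frame trajectories is what keeps the memory and computation low enough for real-time use.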

Original language: English
Pages: 808-812
Number of pages: 5
DOI: https://doi.org/10.1109/ACPR.2013.146
Publication status: Published - Jan 1, 2013
Event: 2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013 - Naha, Okinawa, Japan
Duration: Nov 5, 2013 - Nov 8, 2013

Other

Other: 2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013
Country: Japan
City: Naha, Okinawa
Period: 11/5/13 - 11/8/13

Fingerprint

  • Cameras
  • Trajectories
  • Pixels
  • Data storage equipment
  • Costs

All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition

Cite this

Nonaka, Y., Shimada, A., Nagahara, H., & Taniguchi, R-I. (2013). Real-time foreground segmentation from moving camera based on case-based trajectory classification. 808-812. Paper presented at 2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013, Naha, Okinawa, Japan. https://doi.org/10.1109/ACPR.2013.146


Nonaka, Y, Shimada, A, Nagahara, H & Taniguchi, R-I 2013, 'Real-time foreground segmentation from moving camera based on case-based trajectory classification', Paper presented at 2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013, Naha, Okinawa, Japan, 11/5/13 - 11/8/13, pp. 808-812. https://doi.org/10.1109/ACPR.2013.146
Nonaka Y, Shimada A, Nagahara H, Taniguchi R-I. Real-time foreground segmentation from moving camera based on case-based trajectory classification. 2013. Paper presented at 2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013, Naha, Okinawa, Japan. https://doi.org/10.1109/ACPR.2013.146
Nonaka, Yosuke ; Shimada, Atsushi ; Nagahara, Hajime ; Taniguchi, Rin-Ichiro. / Real-time foreground segmentation from moving camera based on case-based trajectory classification. Paper presented at 2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013, Naha, Okinawa, Japan. 5 p.
@conference{101927e1151f42a3870acde0f1b972b1,
title = "Real-time foreground segmentation from moving camera based on case-based trajectory classification",
abstract = "Several methods for foreground segmentation from a moving camera have recently been proposed. Trajectory-based methods are one typical approach: they segment video frames into foreground and background regions by obtaining long-term trajectories over the entire frame and grouping them according to learned pixel- or motion-based object features. However, maintaining these trajectories often requires a large amount of computation and memory. We present a trajectory-based method that aims at real-time foreground segmentation from a moving camera. Unlike conventional methods, we use trajectories obtained sparsely from only two successive video frames. In addition, our method exploits the spatio-temporal features of trajectories through a case-based approach that improves detection results. We compare our method with previous approaches and show results on challenging video sequences.",
author = "Yosuke Nonaka and Atsushi Shimada and Hajime Nagahara and Rin-Ichiro Taniguchi",
year = "2013",
month = "1",
day = "1",
doi = "10.1109/ACPR.2013.146",
language = "English",
pages = "808--812",
note = "2013 2nd IAPR Asian Conference on Pattern Recognition, ACPR 2013 ; Conference date: 05-11-2013 Through 08-11-2013",

}

TY - CONF

T1 - Real-time foreground segmentation from moving camera based on case-based trajectory classification

AU - Nonaka, Yosuke

AU - Shimada, Atsushi

AU - Nagahara, Hajime

AU - Taniguchi, Rin-Ichiro

PY - 2013/1/1

Y1 - 2013/1/1

N2 - Several methods for foreground segmentation from a moving camera have recently been proposed. Trajectory-based methods are one typical approach: they segment video frames into foreground and background regions by obtaining long-term trajectories over the entire frame and grouping them according to learned pixel- or motion-based object features. However, maintaining these trajectories often requires a large amount of computation and memory. We present a trajectory-based method that aims at real-time foreground segmentation from a moving camera. Unlike conventional methods, we use trajectories obtained sparsely from only two successive video frames. In addition, our method exploits the spatio-temporal features of trajectories through a case-based approach that improves detection results. We compare our method with previous approaches and show results on challenging video sequences.

AB - Several methods for foreground segmentation from a moving camera have recently been proposed. Trajectory-based methods are one typical approach: they segment video frames into foreground and background regions by obtaining long-term trajectories over the entire frame and grouping them according to learned pixel- or motion-based object features. However, maintaining these trajectories often requires a large amount of computation and memory. We present a trajectory-based method that aims at real-time foreground segmentation from a moving camera. Unlike conventional methods, we use trajectories obtained sparsely from only two successive video frames. In addition, our method exploits the spatio-temporal features of trajectories through a case-based approach that improves detection results. We compare our method with previous approaches and show results on challenging video sequences.

UR - http://www.scopus.com/inward/record.url?scp=84899118694&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84899118694&partnerID=8YFLogxK

U2 - 10.1109/ACPR.2013.146

DO - 10.1109/ACPR.2013.146

M3 - Paper

AN - SCOPUS:84899118694

SP - 808

EP - 812

ER -