TY - GEN
T1 - Background Subtraction Network Module Ensemble for Background Scene Adaptation
AU - Hamada, Taiki
AU - Minematsu, Tsubasa
AU - Shimada, Atsushi
AU - Okubo, Fumiya
AU - Taniguchi, Yuta
N1 - Funding Information:
This work was supported by JSPS KAKENHI Grant Number JP21K17864.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Background subtraction networks outperform traditional hand-crafted background subtraction methods. Their main advantage is the ability to automatically learn background features from training scenes. When a trained network is applied to new target scenes, adapting it to those scenes is crucial. However, few studies have focused on reusing multiple trained models for new target scenes. Considering that background changes fall into several categories, such as illumination changes, a model trained on a particular background scene can work effectively for a target scene similar to that training scene. In this study, we propose a method that ensembles module networks, each trained on a different background scene. Experimental results show that the proposed method is significantly more accurate than conventional methods on the target scene when tuned with only a few frames.
AB - Background subtraction networks outperform traditional hand-crafted background subtraction methods. Their main advantage is the ability to automatically learn background features from training scenes. When a trained network is applied to new target scenes, adapting it to those scenes is crucial. However, few studies have focused on reusing multiple trained models for new target scenes. Considering that background changes fall into several categories, such as illumination changes, a model trained on a particular background scene can work effectively for a target scene similar to that training scene. In this study, we propose a method that ensembles module networks, each trained on a different background scene. Experimental results show that the proposed method is significantly more accurate than conventional methods on the target scene when tuned with only a few frames.
UR - http://www.scopus.com/inward/record.url?scp=85143905004&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85143905004&partnerID=8YFLogxK
U2 - 10.1109/AVSS56176.2022.9959316
DO - 10.1109/AVSS56176.2022.9959316
M3 - Conference contribution
AN - SCOPUS:85143905004
T3 - AVSS 2022 - 18th IEEE International Conference on Advanced Video and Signal-Based Surveillance
BT - AVSS 2022 - 18th IEEE International Conference on Advanced Video and Signal-Based Surveillance
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE International Conference on Advanced Video and Signal-Based Surveillance, AVSS 2022
Y2 - 29 November 2022 through 2 December 2022
ER -