Recently, deep neural networks (DNNs) have demonstrated excellent performance in change detection. DNN-based background subtraction automatically discovers background features from datasets and outperforms traditional background modeling based on handcrafted features and/or subtraction strategies. However, most studies mainly discuss the accuracy of foreground detection and do not analyze how or why DNNs work well for change detection tasks. It is necessary to understand what a DNN learns as background features in order to discuss its potential for background subtraction. In this paper, we focus on the filters in the first convolution layer and the activations of neurons in the last fully connected layer to understand the behavior of the DNN. From the experiments, we found that 1) the first layer performs the role of background subtraction using several filters, and 2) the last layer groups certain background changes together without supervised signals. These findings suggest the possibility of a new background modeling strategy based on features extracted in a data-driven manner.
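To illustrate finding (1), the following is a minimal NumPy sketch (not the paper's actual architecture or learned weights) of how a first-layer convolution filter can implement background subtraction: when a filter's weights approximate +1 on the current-frame channel and −1 on the background-image channel, its response is nonzero only where the frame deviates from the background.

```python
import numpy as np

def conv2d_single(x, w):
    """Valid 2-D cross-correlation of a (C, H, W) input with a (C, kH, kW) filter."""
    c, h, wd = x.shape
    _, kh, kw = w.shape
    out = np.zeros((h - kh + 1, wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w)
    return out

# Two-channel input: channel 0 = current frame, channel 1 = background model.
background = np.full((5, 5), 0.5)
frame = background.copy()
frame[2, 2] = 1.0                      # a single foreground pixel

x = np.stack([frame, background])      # shape (2, 5, 5)

# A 1x1 "difference" filter: +1 on the frame channel, -1 on the background channel.
w = np.array([[[1.0]], [[-1.0]]])      # shape (2, 1, 1)

response = conv2d_single(x, w)
# response is zero everywhere except at the foreground pixel (2, 2).
```

A learned filter would of course be noisier and larger than this idealized 1x1 difference kernel, but the sketch shows the mechanism by which subtraction can emerge in the first layer.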