In this paper, we propose a new heuristic training procedure that helps a deep neural network (DNN) repeatedly escape from a local minimum and move to a better one. Our method repeats the following process multiple times: randomly reinitialize the weights of the last layer of a converged DNN while preserving the weights of the remaining layers, and then conduct a new round of training. The motivation is that training in the new round can learn better parameters by starting from the 'good' parameters learned in the previous round. With multiple randomly initialized DNNs trained under this procedure, we obtain an ensemble of DNNs that is more accurate and more diverse than one produced by the normal training procedure. We call this framework 'retraining'. Experiments on eight DNN models show that our method generally outperforms state-of-the-art ensemble learning methods. We also provide two variants of the retraining framework for ensemble learning tasks in which 1) DNNs exhibit very high training accuracies (e.g., > 95%) and 2) DNNs are too computationally expensive to train.
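The core retraining loop described above can be sketched as follows. This is only a minimal illustration on a toy two-layer numpy network trained with plain gradient descent; the network, data, optimizer, and round/step counts are all placeholder assumptions, not the paper's actual setup. The essential idea it demonstrates is the same: after each round converges, only the last layer's weights are randomly reinitialized while the earlier layers' weights are kept.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (stand-in for a real dataset): y = sin(x).
X = np.linspace(-3.0, 3.0, 64).reshape(-1, 1)
Y = np.sin(X)

H = 16  # hidden width (arbitrary choice for this sketch)

def init_last_layer():
    """Random (re)initialization of the last layer only."""
    return 0.5 * rng.standard_normal((H, 1)), np.zeros(1)

# Full random initialization of all layers.
W1 = 0.5 * rng.standard_normal((1, H)); b1 = np.zeros(H)
W2, b2 = init_last_layer()

def loss():
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - Y) ** 2))

def train(steps=500, lr=0.01):
    """One 'round' of training: plain gradient descent to (approximate) convergence."""
    global W1, b1, W2, b2
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)
        d = 2.0 * (h @ W2 + b2 - Y) / len(X)     # dLoss/dOutput
        dW2, db2 = h.T @ d, d.sum(0)
        dz = (d @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
        dW1, db1 = X.T @ dz, dz.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

first_loss = loss()
rounds = 3  # number of retraining rounds (assumed for the sketch)
for rnd in range(rounds):
    train()                      # converge in this round
    if rnd < rounds - 1:
        # Keep the learned hidden-layer weights; reset only the last layer.
        W2, b2 = init_last_layer()
final_loss = loss()
print(first_loss, final_loss)
```

Running this loop once per randomly initialized network, and collecting the final models, would yield the ensemble the abstract refers to.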