A method is proposed for determining proper neural network structures. It is a pruning method that removes insignificant connections from a large initial network, turning it into a network of lower complexity with better generalization ability. A new technique is proposed for evaluating the net, or non-superficial, significance of each connection. Based on this significance measure, insignificant weights are deleted, and at the same time the surviving weights are adjusted to their appropriate values without any additional learning. Because the significance measure is directly related to a measure of generalization ability, the resulting network generalizes well. The method applies to any neural network with differentiable neurons trained by supervised learning, including multilayer feedforward and recurrent networks. The resulting network contains no redundant signals and no unnecessary weights, so it is easy to see how the network works. Several examples demonstrate the validity of the method.
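The abstract does not reproduce the paper's significance measure, but the general idea of significance-based pruning can be sketched as follows. This is a minimal illustration, assuming a hypothetical first-order proxy `|w * dE/dw|` for connection significance; the paper's actual measure, and its adjustment of surviving weights without retraining, are not shown here.

```python
import numpy as np

def prune_by_significance(W, significance, keep_ratio=0.5):
    """Zero out the least significant weights of one layer.

    `significance` has the same shape as W. The measure used below
    is a stand-in for illustration, not the paper's own measure.
    """
    k = int(np.ceil(keep_ratio * W.size))
    # Threshold at the k-th largest significance value; weights
    # below it are treated as insignificant and removed.
    threshold = np.sort(significance.ravel())[-k]
    mask = significance >= threshold
    return W * mask, mask

# Tiny usage example on a random 4x4 weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
grad = rng.normal(size=(4, 4))          # pretend dE/dW from training
sig = np.abs(W * grad)                  # assumed significance proxy
W_pruned, mask = prune_by_significance(W, sig, keep_ratio=0.5)
```

After pruning, half of the connections are zeroed; a full implementation of the paper's method would additionally modify the surviving weights so that no further learning is needed.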
Number of pages: 6
Publication status: Published - 1995
All Science Journal Classification (ASJC) codes
- Electrical and Electronic Engineering