Structure designing method for feedforward and recurrent neural networks based on a net significant measure

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

A method is proposed for determining proper neural network structures. The method is a pruning method that removes insignificant connections from a large initial network, turning it into a network with lower complexity but better generalization ability. A new technique is proposed for evaluating the net, non-superficial significance of each connection. Based on this significance measure, insignificant weights are deleted, and at the same time the remaining weights are adjusted to appropriate values without any additional learning. The significance measure is directly related to a measure of generalization ability; thus a neural network with good generalization can be obtained. The method is applicable to any neural network that has differentiable neurons and is trained by supervised learning, including multilayer feedforward and recurrent networks. The resulting network contains no redundant signals or unnecessary weights, so it is easy to see how the network works. Several examples demonstrate the validity of the method.
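The abstract describes a prune-then-adjust procedure but does not give the significance formula itself. A minimal sketch of the general idea in NumPy, using a stand-in significance measure (the exact increase in training error when a weight is zeroed, found by trial removal on a tiny network) rather than the paper's own measure, and an assumed toy feedforward setup with one noise input whose connections should come out insignificant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends on inputs x0 and x1 only; x2 is pure noise,
# so connections leaving x2 should prove insignificant.
X = rng.normal(size=(300, 3))
y = np.tanh(X[:, :2] @ np.array([1.5, -2.0]))[:, None]

# One hidden layer of tanh units (differentiable neurons, as the method requires).
W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(X, W1, W2):
    H = np.tanh(X @ W1)
    return H, H @ W2

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

# Train with plain full-batch gradient descent (supervised learning).
lr = 0.05
for _ in range(2000):
    H, pred = forward(X, W1, W2)
    err = 2.0 * (pred - y) / len(X)       # dE/dpred for mean squared error
    gW2 = H.T @ err
    gH = (err @ W2.T) * (1.0 - H ** 2)    # backprop through tanh
    gW1 = X.T @ gH
    W1 -= lr * gW1
    W2 -= lr * gW2

# Stand-in significance of each input-to-hidden weight: how much the
# training error grows when that weight alone is set to zero.
H, pred = forward(X, W1, W2)
base = mse(pred, y)
signif = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        saved, W1[i, j] = W1[i, j], 0.0
        signif[i, j] = mse(forward(X, W1, W2)[1], y) - base
        W1[i, j] = saved

# Prune the least significant half of the input-to-hidden weights.
thresh = np.median(signif)
W1_pruned = np.where(signif > thresh, W1, 0.0)
pruned_loss = mse(forward(X, W1_pruned, W2)[1], y)
print(base, pruned_loss)
```

The paper's method additionally adjusts the surviving weights analytically, without retraining, as part of the same deletion step; the sketch above omits that adjustment and is only an illustration of significance-based pruning.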

Original language: English
Pages (from-to): 358-363
Number of pages: 6
Journal: Unknown Journal
Issue number: 409
Publication status: Published - 1995


All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Cite this

@article{91b7cb1b847240709f7a665f0f7b2b8a,
title = "Structure designing method for feedforward and recurrent neural networks based on a net significant measure",
author = "Junichi Murata",
year = "1995",
language = "English",
pages = "358--363",
journal = "Quaternary International",
issn = "1040-6182",
publisher = "Elsevier Limited",
number = "409",

}
