Evaluation of nonlinear optimization methods for the learning algorithm of artificial neural networks

Hideyuki Takagi, Shigeo Sakaue, Hayato Togawa

Research output: Contribution to journal › Article

Abstract

This paper describes the implementation of nonlinear optimization methods in the learning of neural networks (NN) and the speed efficiency of four proposed improvements over the backpropagation algorithm. The problems of the backpropagation learning method are pointed out first, and the effectiveness of implementing a nonlinear optimization method as a solution to these problems is described. Two nonlinear optimization methods are selected, after inspecting several nonlinear methods from the viewpoint of NN learning, to avoid the problems of the backpropagation algorithm: the linear search method by Davies, Swann, and Campey (DSC), and the conjugate gradient method by Fletcher and Reeves. The NN learning algorithms incorporating these standard methods are formulated. Moreover, the following four improvements of the nonlinear optimization methods are proposed to shorten the NN learning time: (a) fast forward calculation in the linear search at the cost of additional memory; (b) avoiding entrapment at a local minimum in the early stage of the linear search; (c) applying a linear search method suitable for parallel processing; and (d) switching the gradient direction in the conjugate gradient method. The evaluation results show that all of the methods described here are effective in shortening the learning time.
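
To make concrete the kind of update rule the abstract refers to, the sketch below shows Fletcher-Reeves conjugate gradient training on a flattened weight vector in Python, with a simple backtracking line search standing in for the DSC linear search. The names fletcher_reeves_train, loss_fn, and grad_fn are hypothetical placeholders (grad_fn would typically be the backpropagated gradient of the network error); this is an illustrative sketch under those assumptions, not the authors' implementation.

import numpy as np

def fletcher_reeves_train(w, loss_fn, grad_fn, n_iters=100, tol=1e-6):
    # w: flattened NN weight vector; loss_fn/grad_fn: hypothetical error
    # function and its gradient (e.g. obtained via backpropagation).
    g = grad_fn(w)
    d = -g                                  # initial direction: steepest descent
    for _ in range(n_iters):
        # Backtracking line search along d (a simple stand-in for the DSC procedure).
        alpha, loss0 = 1.0, loss_fn(w)
        while loss_fn(w + alpha * d) > loss0 and alpha > 1e-10:
            alpha *= 0.5
        w = w + alpha * d
        g_new = grad_fn(w)
        if np.linalg.norm(g_new) < tol:     # stop when the gradient is small
            break
        # Fletcher-Reeves coefficient: beta = ||g_new||^2 / ||g||^2.
        beta = (g_new @ g_new) / (g @ g)
        d = -g_new + beta * d               # new conjugate search direction
        g = g_new
    return w

# Example: minimize an ill-conditioned quadratic as a stand-in for an NN error surface.
A = np.diag([1.0, 10.0, 100.0])
w_final = fletcher_reeves_train(np.ones(3), lambda w: 0.5 * w @ A @ w, lambda w: A @ w)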

Original language: English
Pages (from-to): 101-111
Number of pages: 11
Journal: Systems and Computers in Japan
Volume: 23
Issue number: 1
DOIs: 10.1002/scj.4690230109
Publication status: Published - Jan 1 1992
Externally published: Yes

Fingerprint

Linear search
Nonlinear optimization
Optimization methods
Learning algorithms
Artificial neural networks
Backpropagation algorithm
Conjugate gradient method
Search methods
Parallel processing
Local minima
Evaluation

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Information Systems
  • Hardware and Architecture
  • Computational Theory and Mathematics

Cite this

Evaluation of nonlinear optimization methods for the learning algorithm of artificial neural networks. / Takagi, Hideyuki; Sakaue, Shigeo; Togawa, Hayato.

In: Systems and Computers in Japan, Vol. 23, No. 1, 01.01.1992, p. 101-111.

Research output: Contribution to journal › Article

@article{8d4f0a42e08449ee8e4e1538b88520ce,
title = "Evaluation of nonlinear optimization methods for the learning algorithm of artificial neural networks",
abstract = "This paper describes the implementation of nonlinear optimization methods in the learning of neural networks (NN) and the speed efficiency of four proposed improvements over the backpropagation algorithm. The problems of the backpropagation learning method are pointed out first, and the effectiveness of implementing a nonlinear optimization method as a solution to these problems is described. Two nonlinear optimization methods are selected, after inspecting several nonlinear methods from the viewpoint of NN learning, to avoid the problems of the backpropagation algorithm: the linear search method by Davies, Swann, and Campey (DSC), and the conjugate gradient method by Fletcher and Reeves. The NN learning algorithms incorporating these standard methods are formulated. Moreover, the following four improvements of the nonlinear optimization methods are proposed to shorten the NN learning time: (a) fast forward calculation in the linear search at the cost of additional memory; (b) avoiding entrapment at a local minimum in the early stage of the linear search; (c) applying a linear search method suitable for parallel processing; and (d) switching the gradient direction in the conjugate gradient method. The evaluation results show that all of the methods described here are effective in shortening the learning time.",
author = "Hideyuki Takagi and Shigeo Sakaue and Hayato Togawa",
year = "1992",
month = "1",
day = "1",
doi = "10.1002/scj.4690230109",
language = "English",
volume = "23",
pages = "101--111",
journal = "Systems and Computers in Japan",
issn = "0882-1666",
publisher = "John Wiley and Sons Inc.",
number = "1",

}

TY - JOUR

T1 - Evaluation of nonlinear optimization methods for the learning algorithm of artificial neural networks

AU - Takagi, Hideyuki

AU - Sakaue, Shigeo

AU - Togawa, Hayato

PY - 1992/1/1

Y1 - 1992/1/1

N2 - This paper describes the implementation of nonlinear optimization methods in the learning of neural networks (NN) and the speed efficiency of four proposed improvements over the backpropagation algorithm. The problems of the backpropagation learning method are pointed out first, and the effectiveness of implementing a nonlinear optimization method as a solution to these problems is described. Two nonlinear optimization methods are selected, after inspecting several nonlinear methods from the viewpoint of NN learning, to avoid the problems of the backpropagation algorithm: the linear search method by Davies, Swann, and Campey (DSC), and the conjugate gradient method by Fletcher and Reeves. The NN learning algorithms incorporating these standard methods are formulated. Moreover, the following four improvements of the nonlinear optimization methods are proposed to shorten the NN learning time: (a) fast forward calculation in the linear search at the cost of additional memory; (b) avoiding entrapment at a local minimum in the early stage of the linear search; (c) applying a linear search method suitable for parallel processing; and (d) switching the gradient direction in the conjugate gradient method. The evaluation results show that all of the methods described here are effective in shortening the learning time.

AB - This paper describes the implementation of nonlinear optimization methods in the learning of neural networks (NN) and the speed efficiency of four proposed improvements over the backpropagation algorithm. The problems of the backpropagation learning method are pointed out first, and the effectiveness of implementing a nonlinear optimization method as a solution to these problems is described. Two nonlinear optimization methods are selected, after inspecting several nonlinear methods from the viewpoint of NN learning, to avoid the problems of the backpropagation algorithm: the linear search method by Davies, Swann, and Campey (DSC), and the conjugate gradient method by Fletcher and Reeves. The NN learning algorithms incorporating these standard methods are formulated. Moreover, the following four improvements of the nonlinear optimization methods are proposed to shorten the NN learning time: (a) fast forward calculation in the linear search at the cost of additional memory; (b) avoiding entrapment at a local minimum in the early stage of the linear search; (c) applying a linear search method suitable for parallel processing; and (d) switching the gradient direction in the conjugate gradient method. The evaluation results show that all of the methods described here are effective in shortening the learning time.

UR - http://www.scopus.com/inward/record.url?scp=0026711847&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0026711847&partnerID=8YFLogxK

U2 - 10.1002/scj.4690230109

DO - 10.1002/scj.4690230109

M3 - Article

AN - SCOPUS:0026711847

VL - 23

SP - 101

EP - 111

JO - Systems and Computers in Japan

JF - Systems and Computers in Japan

SN - 0882-1666

IS - 1

ER -