Relation between weight initialization of neural networks and pruning algorithms: case study on Mackey-Glass time series

W. Wan, K. Hirasawa, J. Hu, Junichi Murata

Research output: Contribution to conference › Paper

4 Citations (Scopus)

Abstract

Weight initialization directly affects the convergence of learning algorithms. In this paper we present a case study on the well-known Mackey-Glass time series problem, seeking relations between the weight initialization of neural networks and pruning algorithms. The pruning algorithm used in the simulations is the Laplace regularizer method, that is, the backpropagation algorithm with a Laplace regularizer added to the criterion function. Simulation results show that different initial weight matrices yield almost the same generalization ability when the pruning algorithm is applied, at least for the Mackey-Glass time series.
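The abstract names two ingredients that are easy to sketch in code: the Mackey-Glass delay-differential series used as the benchmark, and training with a Laplace (L1) regularizer added to the criterion function, which shrinks unneeded weights toward zero so they can be pruned. The sketch below is illustrative only: the model (a linear one-step predictor rather than the paper's neural network), all parameter values, and the pruning threshold are assumptions, not taken from the paper.

```python
import random

def mackey_glass(n, tau=17, a=0.2, b=0.1, x0=1.2):
    """Discrete-time Euler approximation (dt = 1) of the Mackey-Glass equation:
       dx/dt = a*x(t - tau) / (1 + x(t - tau)**10) - b*x(t).
       tau=17 is the common chaotic setting; x0 is an assumed constant history."""
    xs = [x0] * (tau + 1)  # constant initial history
    for _ in range(n):
        x_tau, x = xs[-(tau + 1)], xs[-1]
        xs.append(x + a * x_tau / (1.0 + x_tau ** 10) - b * x)
    return xs[tau + 1:]

def train_l1(series, order=4, lam=1e-3, lr=1e-2, epochs=50, seed=0):
    """Fit x(t) ~ w . [x(t-order), ..., x(t-1)] by SGD on squared error plus the
    Laplace term lam * sum(|w_i|), whose subgradient sign(w_i) pushes weights
    toward zero. Hyperparameters here are assumed, not from the paper."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(order)]
    for _ in range(epochs):
        for t in range(order, len(series)):
            window = series[t - order:t]
            err = sum(wi * xi for wi, xi in zip(w, window)) - series[t]
            for i in range(order):
                w[i] -= lr * (err * window[i] + lam * (1.0 if w[i] >= 0 else -1.0))
    return w

def prune(w, threshold=1e-2):
    """Zero out weights the regularizer has driven below the threshold."""
    return [wi if abs(wi) > threshold else 0.0 for wi in w]

series = mackey_glass(500)
weights = prune(train_l1(series))
```

The point of the Laplace (as opposed to quadratic) penalty is visible in `prune`: because the L1 subgradient has constant magnitude, weights that do not help the fit are driven close to zero and can be removed outright, which is how regularization doubles as a pruning criterion.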

Original language: English
Pages: 1750-1755
Number of pages: 6
Publication status: Published - Jan 1 2001
Event: International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States
Duration: Jul 15 2001 - Jul 19 2001



All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence

Cite this

Wan, W., Hirasawa, K., Hu, J., & Murata, J. (2001). Relation between weight initialization of neural networks and pruning algorithms: case study on Mackey-Glass time series. 1750-1755. Paper presented at International Joint Conference on Neural Networks (IJCNN'01), Washington, DC, United States.


Scopus: SCOPUS:0034876346
http://www.scopus.com/inward/record.url?scp=0034876346&partnerID=8YFLogxK