Function approximation using LVQ and fuzzy sets

Shon Min-Kyu, Junichi Murata, Kotaro Hirasawa

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Neural networks with local activation functions, such as radial basis function networks (RBFNs), have the merit of excellent generalization ability. When this type of network is used for function approximation, it is very important to properly divide the input space into local regions, to each of which a local activation function is assigned. In RBFNs, this amounts to determining the locations and number of the RBFs, which is generally done based on the distribution of the input data. In function approximation, however, the output information (the value of the function to be approximated) must also be considered when determining the local regions. A new method is proposed that uses an LVQ network to approximate functions based on this output information. It divides the input space into regions, each centered on a prototype vector. The ordinary LVQ, however, outputs only discrete values and therefore cannot approximate continuous functions. In this paper, fuzzy sets are employed in both learning and output calculation. Finally, the proposed method uses the back-propagation algorithm for fine adjustment. An example is provided to show the effectiveness of the proposed method.
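The abstract describes the method only in outline. The following Python sketch illustrates the general idea, not the authors' implementation: a one-dimensional input space is divided among prototype vectors, Gaussian fuzzy memberships blend the prototypes' output values into a continuous prediction, an LVQ-style winner update places the prototypes, and a simple gradient step fine-tunes the output values. The Gaussian membership form, the width, the learning rate, and the choice to fine-tune only the output values are all assumptions, not details taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate (illustrative choice only, not from the paper).
def f(x):
    return np.sin(2.0 * np.pi * x)

X = rng.uniform(0.0, 1.0, size=200)    # training inputs (1-D for simplicity)
Y = f(X)                               # training targets

n_proto = 10
protos = np.linspace(0.0, 1.0, n_proto)   # prototype vectors, one per local region
values = np.zeros(n_proto)                # output value attached to each region
sigma = 0.08                              # fuzzy membership width (assumed)
lr = 0.05                                 # learning rate (assumed)

def membership(x):
    """Fuzzy (Gaussian) membership of input x in each prototype's region."""
    m = np.exp(-((x - protos) ** 2) / (2.0 * sigma ** 2))
    return m / m.sum()

def predict(x):
    """Continuous output: membership-weighted mix of the prototype outputs."""
    return membership(x) @ values

# Coarse stage: LVQ-style update -- move the winning (nearest) prototype toward
# the input and pull its output value toward the target.
for _ in range(50):
    for x, y in zip(X, Y):
        w = np.argmin(np.abs(protos - x))
        protos[w] += lr * (x - protos[w])
        values[w] += lr * (y - values[w])

# Fine-tuning stage: gradient descent on the squared error of the fuzzy output
# with respect to the prototype output values (a back-propagation-style
# adjustment; only the output values are tuned in this sketch).
for _ in range(200):
    for x, y in zip(X, Y):
        m = membership(x)
        err = m @ values - y
        values -= lr * err * m

x_test = np.linspace(0.0, 1.0, 5)
print("predicted:", np.round([predict(x) for x in x_test], 3))
print("target:   ", np.round(f(x_test), 3))
```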

Original language: English
Pages (from-to): 1442-1447
Number of pages: 6
Journal: Unknown Journal
Volume: 3
Publication status: Published - 2001


All Science Journal Classification (ASJC) codes

  • Hardware and Architecture
  • Control and Systems Engineering

Cite this

Min-Kyu, S., Murata, J., & Hirasawa, K. (2001). Function approximation using LVQ and fuzzy sets. Unknown Journal, 3, 1442-1447.

