TY - GEN
T1 - Fast competition approach using self organizing map for lip-reading applications
AU - Sagheer, Alaa
AU - Tsuruta, Naoyuki
AU - Taniguchi, Rin Ichiro
AU - Arita, Daisaku
AU - Maeda, Sakashi
N1 - Copyright:
Copyright 2020 Elsevier B.V., All rights reserved.
PY - 2006
Y1 - 2006
N2 - The Self Organizing Map (SOM), or Kohonen SOM, is one of the most widely used neural network paradigms based on unsupervised competitive learning. However, the search algorithm introduced by Kohonen is slow when the size of the map is large. This slowness is caused by searching for the best matching neuron among "all" the map neurons for "each" input sample. In this paper, we present a new strategy capable of accelerating the SOM's competition algorithm. Unlike the Kohonen SOM strategy, the new approach considers "only" the neurons aligned along the low-order principal components of the feature space and neglects the remaining neurons. The idea is based on the fact that most of the data variance lies on the low-order principal components of the manifold, which often contain the most important features of the data [1] [6]. The new SOM works effectively as a feature extractor for all kinds of manifolds, even curved ones. Two data sets are utilized to illustrate how the proposed algorithm effectively reduces the computation effort (or time) of the SOM. For an N-dimensional feature space, it is shown here that the computation effort to find the best matching units is reduced to O(D1 + D2 + ... + DN) instead of O(D1 × D2 × ... × DN), where Di is the number of neurons along dimension i. Also, under the same experimental conditions, the computation time of our method is six times less than that of the fast DCT. In all cases, the new SOM achieves at least the same recognition accuracy, and in some cases better.
AB - The Self Organizing Map (SOM), or Kohonen SOM, is one of the most widely used neural network paradigms based on unsupervised competitive learning. However, the search algorithm introduced by Kohonen is slow when the size of the map is large. This slowness is caused by searching for the best matching neuron among "all" the map neurons for "each" input sample. In this paper, we present a new strategy capable of accelerating the SOM's competition algorithm. Unlike the Kohonen SOM strategy, the new approach considers "only" the neurons aligned along the low-order principal components of the feature space and neglects the remaining neurons. The idea is based on the fact that most of the data variance lies on the low-order principal components of the manifold, which often contain the most important features of the data [1] [6]. The new SOM works effectively as a feature extractor for all kinds of manifolds, even curved ones. Two data sets are utilized to illustrate how the proposed algorithm effectively reduces the computation effort (or time) of the SOM. For an N-dimensional feature space, it is shown here that the computation effort to find the best matching units is reduced to O(D1 + D2 + ... + DN) instead of O(D1 × D2 × ... × DN), where Di is the number of neurons along dimension i. Also, under the same experimental conditions, the computation time of our method is six times less than that of the fast DCT. In all cases, the new SOM achieves at least the same recognition accuracy, and in some cases better.
UR - http://www.scopus.com/inward/record.url?scp=40649122802&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=40649122802&partnerID=8YFLogxK
U2 - 10.1109/ijcnn.2006.247396
DO - 10.1109/ijcnn.2006.247396
M3 - Conference contribution
AN - SCOPUS:40649122802
SN - 0780394909
SN - 9780780394902
T3 - IEEE International Conference on Neural Networks - Conference Proceedings
SP - 3775
EP - 3782
BT - International Joint Conference on Neural Networks 2006, IJCNN '06
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - International Joint Conference on Neural Networks 2006, IJCNN '06
Y2 - 16 July 2006 through 21 July 2006
ER -