LYAPUNOV THEORY BASED ADAPTIVE LEARNING ALGORITHM FOR MULTILAYER NEURAL NETWORKS



Acir N., Menguc E. C.

NEURAL NETWORK WORLD, vol.24, no.6, pp.619-636, 2014 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 24 Issue: 6
  • Publication Date: 2014
  • DOI: 10.14311/nnw.2014.24.035
  • Journal Name: NEURAL NETWORK WORLD
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.619-636
  • Keywords: Lyapunov stability theory, multilayer neural network, Lagrange multiplier theory, adaptive learning
  • Kayseri University Affiliated: No

Abstract

This paper presents a novel weight-updating algorithm for training multilayer neural networks (MLNNs). The MLNN system is first linearized, and the design procedure is then formulated as an inequality-constrained optimization problem. A well-chosen Lyapunov function is determined and integrated into the constraint function to guarantee asymptotic stability in the sense of Lyapunov. The convergence of the training algorithm is thus improved through a new analytical adaptation gain rate that adjusts itself adaptively according to a sequential square error rate. The proposed algorithm is compared with two types of backpropagation algorithms and a Lyapunov-theory-based MLNN algorithm on three benchmark problems: XOR, 3-bit parity, and the 8-3 encoder. The results are compared in terms of the number of learning iterations and the computational time required for a specified convergence rate, and they clearly indicate that the proposed algorithm converges much faster than the other three. The proposed algorithm is also tested comparatively on a real iris image database for a multiple-input multiple-output classification problem, and the effect of the adaptation gain rate on faster convergence and higher performance is verified.
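
To make the idea concrete, below is a minimal sketch in Python of a Lyapunov-motivated adaptive-gain update for a one-hidden-layer MLNN on the XOR benchmark. The paper's exact update law and constraint formulation are not reproduced here; in particular, the adaptation rule below, which rescales the gain by comparing successive squared errors, is a hypothetical stand-in chosen only to illustrate the mechanism the abstract describes (a gain that adjusts itself according to a sequential square error rate, with the squared error V(k) = 1/2 ||e(k)||^2 serving as the Lyapunov candidate).

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # XOR inputs
T = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

# One hidden layer with 4 units, sigmoid activations throughout
W1 = rng.normal(0.0, 0.5, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 0.5       # adaptation gain
prev_E = None   # previous squared error
for k in range(5000):
    H = sigmoid(X @ W1 + b1)            # hidden layer output
    Y = sigmoid(H @ W2 + b2)            # network output
    e = T - Y
    E = 0.5 * float(np.sum(e ** 2))     # Lyapunov candidate V(k) = 1/2 ||e(k)||^2

    # Hypothetical adaptive gain (an assumption, not the paper's law): contract
    # the gain when the error grew between iterations, expand it mildly when it
    # fell; the bounds keep the gain well behaved.
    if prev_E is not None:
        eta = float(np.clip(eta * (1.05 if E < prev_E else 0.7), 1e-3, 2.0))
    prev_E = E

    # Standard backpropagation gradients for the squared-error loss
    dY = -e * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 -= eta * (H.T @ dY)
    b2 -= eta * dY.sum(axis=0)
    W1 -= eta * (X.T @ dH)
    b1 -= eta * dH.sum(axis=0)

print("final squared error:", E, "final gain:", eta)

In this sketch the gain bounds play the role of the stability constraint: whenever the squared error increases between iterations (Delta V >= 0), the gain is contracted, pushing the update back toward the descent condition Delta V < 0 that Lyapunov-based training schemes aim to enforce.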