Mathematical programming formulations for neural combinatorial optimization algorithms

Kiichi Urahama

    Research output: Contribution to journal › Article › peer-review

    4 Citations (Scopus)

    Abstract

    Analog neural network algorithms for solving combinatorial optimization problems are analyzed on the basis of the saddle point theorem of mathematical programming theory. A generalized Hopfield network scheme is shown to be a gradient system that searches for a saddle point of the Lagrange function of the continuous relaxation of a 0-1 integer program. This derivation of neural network algorithms provides an interpretation of deterministic annealing procedures in terms of mathematical programming theory. The generalized Hopfield network scheme is also interpreted as a gradient projection method in a Riemannian space. The Lagrange function is shown to be a Liapunov function for this gradient algorithm, whose convergence is then guaranteed by Liapunov stability theorems.
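    The scheme described in the abstract can be illustrated with a small numerical sketch. The code below is an assumption-laden toy, not the paper's exact formulation: it minimizes a diagonal quadratic cost over 0-1 variables with a single cardinality constraint, relaxes the variables to (0,1), augments the Lagrange function with a temperature-scaled entropy barrier, and alternates a damped mean-field (deterministic-annealing) update on the primal variables with gradient ascent on the multiplier. The problem instance, step sizes, and cooling schedule are all illustrative choices.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def hopfield_anneal(Q, k, T0=1.0, Tmin=0.05, cool=0.9,
                        inner=100, lr=0.05, damp=0.5):
        """Minimize x^T Q x subject to sum(x) = k over x in {0,1}^n,
        via deterministic annealing on the continuous relaxation.

        At temperature T the Lagrangian carries an entropy barrier
        T * sum(x ln x + (1-x) ln(1-x)); its stationarity condition
        yields the damped fixed-point update below, while the
        multiplier lam follows gradient *ascent* on the constraint
        residual, so the dynamics seek a saddle point.
        """
        n = Q.shape[0]
        x = np.full(n, 0.5)   # interior starting point of the relaxation
        lam = 0.0
        T = T0
        while T > Tmin:
            for _ in range(inner):
                grad = 2.0 * Q @ x + lam                        # dL/dx (entropy term aside)
                x = (1 - damp) * x + damp * sigmoid(-grad / T)  # damped mean-field update
                lam += lr * (x.sum() - k)                       # ascent on the dual variable
            T *= cool                                           # deterministic annealing step
        return (x > 0.5).astype(int)

    # Toy instance: choose k = 1 of 3 items under a diagonal quadratic cost;
    # the cheapest item should win as the temperature is lowered.
    Q = np.diag([2.0, 0.2, 1.0])
    print(hopfield_anneal(Q, k=1))
    ```

    As the temperature decreases, the entropy barrier weakens and the relaxed variables are driven toward the vertices of the unit hypercube, which is the mechanism the abstract attributes to the annealed Hopfield dynamics.
    
    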

    Original language: English
    Pages (from-to): 353-364
    Number of pages: 12
    Journal: Journal of Artificial Neural Networks
    Volume: 2
    Issue number: 4
    Publication status: Published - Dec 1 1995

    All Science Journal Classification (ASJC) codes

    • Engineering(all)
