TY - GEN
T1 - Distributed multi-objective GA for generating comprehensive Pareto front in deceptive optimization problems
AU - Ando, Shin
AU - Suzuki, Einoshin
PY - 2006/12/1
Y1 - 2006/12/1
N2 - This paper discusses a structure of multi-objective optimization problems that causes deception for conventional Multi-Objective Genetic Algorithms (MOGAs). Further, we propose a Distributed Multi-Objective Genetic Algorithm (DMOGA), which employs a multiple-subpopulation implementation and a replacement scheme based on information-theoretic entropy, to improve the performance of MOGAs on such deceptive problems. Several studies have reported that conventional MOGAs have difficulty generating marginal segments of the Pareto front in combinatorial optimization problems, though the structural causes of this behavior have not yet been thoroughly studied. Our analysis of conventional MOGAs' behavior on two deceptive test problems suggests that the use of local density in selection causes an implicit bias that results in premature convergence. DMOGA is a distributed implementation of MOGA that emphasizes the diversity of the subpopulations via the entropy of the objective functions. This approach alleviates premature convergence and enables MOGA to effectively generate Pareto fronts for complex objective functions. In a set of simulated experiments, the proposed method generated more comprehensive Pareto fronts than conventional MOGAs, i.e., NSGA-II and SPEA2, on the deceptive test functions, and also achieved comparable performance on standard multi-objective benchmarks.
AB - This paper discusses a structure of multi-objective optimization problems that causes deception for conventional Multi-Objective Genetic Algorithms (MOGAs). Further, we propose a Distributed Multi-Objective Genetic Algorithm (DMOGA), which employs a multiple-subpopulation implementation and a replacement scheme based on information-theoretic entropy, to improve the performance of MOGAs on such deceptive problems. Several studies have reported that conventional MOGAs have difficulty generating marginal segments of the Pareto front in combinatorial optimization problems, though the structural causes of this behavior have not yet been thoroughly studied. Our analysis of conventional MOGAs' behavior on two deceptive test problems suggests that the use of local density in selection causes an implicit bias that results in premature convergence. DMOGA is a distributed implementation of MOGA that emphasizes the diversity of the subpopulations via the entropy of the objective functions. This approach alleviates premature convergence and enables MOGA to effectively generate Pareto fronts for complex objective functions. In a set of simulated experiments, the proposed method generated more comprehensive Pareto fronts than conventional MOGAs, i.e., NSGA-II and SPEA2, on the deceptive test functions, and also achieved comparable performance on standard multi-objective benchmarks.
UR - http://www.scopus.com/inward/record.url?scp=34547278592&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34547278592&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:34547278592
SN - 0780394879
SN - 9780780394872
T3 - 2006 IEEE Congress on Evolutionary Computation, CEC 2006
SP - 1569
EP - 1576
BT - 2006 IEEE Congress on Evolutionary Computation, CEC 2006
T2 - 2006 IEEE Congress on Evolutionary Computation, CEC 2006
Y2 - 16 July 2006 through 21 July 2006
ER -