Leximin multiple objective optimization for preferences of agents

Toshihiro Matsui, Marius Silaghi, Katsutoshi Hirayama, Makoto Yokoo, Hiroshi Matsuo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

We address a variation of Multiple Objective Distributed Constraint Optimization Problems (MODCOPs). In conventional MODCOPs, a few objectives are globally defined and agents cooperate to find a Pareto optimal solution. In several practical problems, however, the share of each agent is important; such shares are represented as preference values of the agents. This class of problems is defined as the MODCOP on the preferences of agents. In particular, we focus on optimization problems based on the leximin ordering (Leximin AMODCOPs), which improves the equality among agents. Solution methods based on pseudo trees are applied to the Leximin AMODCOPs.
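The leximin ordering compares vectors of agent preference values by sorting each vector in ascending order and then comparing lexicographically, so the worst-off agent is improved first. A minimal Python sketch of this comparison (the aggregation of objectives and the pseudo-tree solution methods from the paper are not shown):

```python
def leximin_key(utilities):
    """Leximin compares utility vectors by their ascending sort,
    lexicographically: improving the worst-off agent comes first."""
    return sorted(utilities)

# Example: (2, 2) is leximin-preferred to (1, 5) because it raises
# the minimum utility from 1 to 2, reflecting equality among agents.
candidates = [(1, 5), (2, 2), (1, 3)]
best = max(candidates, key=leximin_key)
# best == (2, 2)
```

Python's built-in lexicographic comparison of lists makes `sorted` a valid sort key for the leximin order, so selecting a leximin-optimal candidate reduces to a `max` over the sorted utility vectors.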

Original language: English
Title of host publication: PRIMA 2014
Subtitle of host publication: Principles and Practice of Multi-Agent Systems - 17th International Conference, Proceedings
Editors: Hoa Khanh Dam, Jeremy Pitt, Yang Xu, Guido Governatori, Takayuki Ito
Publisher: Springer Verlag
Pages: 423-438
Number of pages: 16
ISBN (Electronic): 9783319131900
DOIs
Publication status: Published - 2014
Event: 17th International Conference on Principles and Practice of Multi-Agent Systems, PRIMA 2014 - Gold Coast, Australia
Duration: Dec 1 2014 - Dec 5 2014

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 8861
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 17th International Conference on Principles and Practice of Multi-Agent Systems, PRIMA 2014
Country/Territory: Australia
City: Gold Coast
Period: 12/1/14 - 12/5/14

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science(all)

