Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings

R. Nair, M. Tambe, M. Yokoo, D. Pynadath, S. Marsella

Research output: Contribution to journal › Conference article › peer-review

257 Citations (Scopus)

Abstract

The problem of deriving joint policies for a group of agents that maximize some joint reward function can be modeled as a decentralized partially observable Markov decision process (POMDP). Yet, despite the growing importance and applications of decentralized POMDP models in the multiagent arena, few algorithms have been developed for efficiently deriving joint policies for these models. This paper presents a new class of locally optimal algorithms called "Joint Equilibrium-based search for policies" (JESP). We first describe an exhaustive version of JESP and subsequently a novel dynamic programming approach to JESP. Our complexity analysis reveals the potential for exponential speedups due to the dynamic programming approach. These theoretical results are verified via empirical comparisons of the two JESP versions with each other and with a globally optimal brute-force search algorithm. Finally, we prove piecewise linearity and convexity (PWLC) properties, thus taking steps towards developing algorithms for continuous belief states.
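The JESP loop sketched in the abstract, repeatedly fixing all agents' policies except one and improving that agent's policy until no single-agent change helps, can be illustrated roughly as follows. This is a minimal sketch of the alternating best-response structure only, not the authors' implementation; `best_response` and `joint_value` are hypothetical stand-ins for the paper's exhaustive or dynamic-programming best-response computation and joint-reward evaluation.

```python
def jesp(initial_policies, best_response, joint_value):
    """Alternating best-response search for a locally optimal joint policy.

    initial_policies: list of per-agent policies (any representation).
    best_response(i, policies): returns a best-response policy for agent i
        given the other agents' policies held fixed.
    joint_value(policies): returns the expected joint reward of the joint policy.
    """
    policies = list(initial_policies)
    value = joint_value(policies)
    improved = True
    while improved:  # terminate at a local optimum (a Nash equilibrium)
        improved = False
        for i in range(len(policies)):  # one agent at a time
            candidate = best_response(i, policies)
            trial = policies[:i] + [candidate] + policies[i + 1:]
            new_value = joint_value(trial)
            if new_value > value:  # accept only strict improvement
                policies, value = trial, new_value
                improved = True
    return policies, value
```

For context, the PWLC property mentioned at the end of the abstract parallels the classical single-agent POMDP result that the finite-horizon value function over belief states is piecewise linear and convex, i.e. representable as V_t(b) = max over α in Γ_t of Σ_s b(s)·α(s) for a finite set of vectors Γ_t; the paper establishes analogous properties as a step toward algorithms over continuous belief states.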

Original language: English
Pages (from-to): 705-711
Number of pages: 7
Journal: IJCAI International Joint Conference on Artificial Intelligence
Publication status: Published - Dec 1 2003
Externally published: Yes
Event: 18th International Joint Conference on Artificial Intelligence, IJCAI 2003 - Acapulco, Mexico
Duration: Aug 9 2003 → Aug 15 2003

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence

