Believing others: Pros and cons

Shigeo Matsubara, Makoto Yokoo

Research output: Contribution to journal › Article › peer-review

30 Citations (Scopus)

Abstract

In open environments there is no central control over agent behavior. On the contrary, agents in such systems can be assumed to be driven primarily by self-interest. Under the assumption that agents remain in the system for significant periods of time, or that the agent composition changes only slowly, we have previously presented a prescriptive strategy for promoting and sustaining cooperation among self-interested agents. The adaptive, probabilistic policy we prescribed promotes reciprocative cooperation that improves both individual and group performance in the long run. In the short run, however, selfish agents can still exploit reciprocative agents. In this paper, we evaluate the hypothesis that the exploitative tendencies of selfish agents can be effectively curbed if reciprocative agents share their "opinions" of other agents. Since the true nature of agents is not known a priori and must be learned from experience, believing others poses its own hazards. We present a learned trust-based evaluation function that is shown to resist both individual and concerted deception on the part of selfish agents in a package delivery domain.
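The mechanism the abstract describes can be illustrated with a minimal sketch: an agent blends its own interaction history with opinions shared by other agents, discounting opinions from witnesses it does not yet trust. All names and parameters here (the blend weight, update rate, and help threshold) are illustrative assumptions, not the paper's actual formulation.

```python
import random

class ReciprocativeAgent:
    """Sketch of a trust-weighted probabilistic reciprocity policy."""

    def __init__(self, blend=0.5, threshold=0.4):
        self.direct = {}        # own experience: agent id -> trust in [0, 1]
        self.blend = blend      # weight on direct experience vs. shared opinions
        self.threshold = threshold

    def update_direct(self, other, cooperated, rate=0.2):
        # Exponential moving average over observed cooperate/defect outcomes;
        # unknown agents start at a neutral 0.5.
        prev = self.direct.get(other, 0.5)
        self.direct[other] = (1 - rate) * prev + rate * (1.0 if cooperated else 0.0)

    def trust(self, other, opinions):
        # Blend own experience with opinions shared by fellow agents.
        # Each opinion is weighted by the trust held in the witness, so a
        # coalition of untrusted liars gains little leverage -- this is the
        # intuition behind resisting concerted deception.
        own = self.direct.get(other, 0.5)
        if opinions:
            num = sum(self.direct.get(w, 0.5) * v for w, v in opinions.items())
            den = sum(self.direct.get(w, 0.5) for w in opinions)
            shared = num / den if den > 0 else 0.5
        else:
            shared = own
        return self.blend * own + (1 - self.blend) * shared

    def will_help(self, other, opinions):
        # Probabilistic reciprocity: refuse outright below the threshold,
        # otherwise help with probability proportional to trust.
        t = self.trust(other, opinions)
        return t >= self.threshold and random.random() < t
```

A selfish agent that repeatedly defects sees its trust score decay, and shared negative opinions accelerate that decay for agents who have not yet interacted with it directly.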

Original language: English
Pages (from-to): 179-203
Number of pages: 25
Journal: Artificial Intelligence
Volume: 142
Issue number: 2
DOIs
Publication status: Published - Dec 2002
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Linguistics and Language
  • Artificial Intelligence
