TY - JOUR
T1 - Believing others
T2 - Pros and cons
AU - Sen, Sandip
N1 - Funding Information:
This work has been supported in part by NSF CAREER award IIS-9702672. We would like to acknowledge the programming efforts of Anish Biswas and Sandip Debnath.
PY - 2002/12
Y1 - 2002/12
AB - In open environments there is no central control over agent behaviors. Rather, agents in such systems can be assumed to be primarily driven by self-interest. Under the assumption that agents remain in the system for significant time periods, or that the agent composition changes only slowly, we have previously presented a prescriptive strategy for promoting and sustaining cooperation among self-interested agents. The adaptive, probabilistic policy we have prescribed promotes reciprocative cooperation that improves both individual and group performance in the long run. In the short run, however, selfish agents could still exploit reciprocative agents. In this paper, we evaluate the hypothesis that the exploitative tendencies of selfish agents can be effectively curbed if reciprocative agents share their "opinions" of other agents. Since the true nature of agents is not known a priori and must be learned from experience, believing others can also pose its own hazards. We provide a learned trust-based evaluation function that is shown to resist both individual and concerted deception on the part of selfish agents in a package delivery domain.
UR - http://www.scopus.com/inward/record.url?scp=0036888596&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0036888596&partnerID=8YFLogxK
U2 - 10.1016/S0004-3702(02)00289-8
DO - 10.1016/S0004-3702(02)00289-8
M3 - Article
AN - SCOPUS:0036888596
SN - 0004-3702
VL - 142
SP - 179
EP - 203
JO - Artificial Intelligence
JF - Artificial Intelligence
IS - 2
ER -