TY - GEN
T1 - Flexible reward plans for crowdsourced tasks
AU - Sakurai, Yuko
AU - Shinoda, Masato
AU - Oyama, Satoshi
AU - Yokoo, Makoto
N1 - Publisher Copyright:
© Springer International Publishing Switzerland 2015.
PY - 2015
Y1 - 2015
N2 - We develop flexible reward plans to elicit truthful predictive probability distributions over a set of uncertain events from workers. In general, strictly proper scoring rules for categorical events reward a worker only for the event that actually occurred. However, different incorrect predictions vary in quality, and the principal would like to assign different rewards to them according to her subjective similarity among events; e.g., a prediction of overcast is closer to sunny than to rainy. We propose concrete methods by which the principal can assign rewards for incorrect predictions according to her subjective similarity between events. We focus on two representative examples of strictly proper scoring rules, spherical and quadratic, in which a worker's expected utility is represented as the inner product of her truthful predictive probability and her declared probability. In this paper, we generalize the inner product by introducing a reward matrix that defines a reward for each prediction-outcome pair. We first show that if the reward matrix is symmetric and positive definite, both the spherical and quadratic proper scoring rules guarantee that a worker's expected utility is maximized when she truthfully declares her prediction. We next compare our rules with the original spherical/quadratic proper scoring rules in terms of the variance of rewards obtained by workers. Finally, we show our experimental results using Amazon Mechanical Turk.
UR - http://www.scopus.com/inward/record.url?scp=84950336549&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84950336549&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-25524-8_25
DO - 10.1007/978-3-319-25524-8_25
M3 - Conference contribution
AN - SCOPUS:84950336549
SN - 9783319255231
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 400
EP - 415
BT - PRIMA 2015
A2 - Torroni, Paolo
A2 - Omicini, Andrea
A2 - Hsu, Jane
A2 - Chen, Qingliang
A2 - Villata, Serena
PB - Springer Verlag
T2 - 18th International Conference on Principles and Practice of Multi-Agent Systems, PRIMA 2015
Y2 - 26 October 2015 through 30 October 2015
ER -