Flexible reward plans for crowdsourced tasks

Yuko Sakurai, Masato Shinoda, Satoshi Oyama, Makoto Yokoo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

We develop flexible reward plans to elicit truthful predictive probability distributions over a set of uncertain events from workers. In general, strictly proper scoring rules for categorical events reward a worker only for the event that actually occurred. However, different incorrect predictions vary in quality, and the principal would like to assign different rewards to them according to her subjective similarity among events; e.g., a prediction of overcast is closer to sunny than to rainy. We propose concrete methods by which the principal can assign rewards for incorrect predictions according to this similarity. We focus on two representative examples of strictly proper scoring rules, spherical and quadratic, where a worker’s expected utility is represented as the inner product of her truthful predictive probability and her declared probability. In this paper, we generalize the inner product by introducing a reward matrix that defines a reward for each prediction-outcome pair. We first show that if the reward matrix is symmetric and positive definite, both the spherical and quadratic proper scoring rules guarantee that a worker maximizes her expected utility by truthfully declaring her prediction. We next compare our rules with the original spherical/quadratic proper scoring rules in terms of the variance of rewards obtained by workers. Finally, we present experimental results obtained using Amazon Mechanical Turk.

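To make the reward-matrix construction concrete, here is a minimal sketch in Python (not the authors’ implementation). It assumes the generalization takes the natural form in which a symmetric positive definite reward matrix A replaces the identity in the standard rules, so the spherical reward for declaring p when event i occurs is (Ap)_i / sqrt(pᵀAp) and the quadratic reward is 2(Ap)_i - pᵀAp. The three-event weather example and all numeric values are hypothetical.

import numpy as np

def spherical_reward(A, p, outcome):
    # Generalized spherical score: (A p)_outcome / sqrt(p^T A p).
    # With A = I this reduces to the standard spherical rule p[outcome] / ||p||.
    return (A @ p)[outcome] / np.sqrt(p @ A @ p)

def quadratic_reward(A, p, outcome):
    # Generalized quadratic (Brier-type) score: 2 (A p)_outcome - p^T A p.
    # With A = I this reduces to the standard quadratic rule 2 p[outcome] - ||p||^2.
    return 2 * (A @ p)[outcome] - p @ A @ p

def expected_utility(rule, A, q, p):
    # Worker's expected reward under true belief q when declaring p.
    return sum(q[i] * rule(A, p, i) for i in range(len(q)))

# Hypothetical reward matrix over (sunny, overcast, rainy): the principal
# pays more for confusing overcast with sunny than with rainy.
A = np.array([[1.0, 0.5, 0.1],
              [0.5, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
assert np.all(np.linalg.eigvalsh(A) > 0)  # symmetric and positive definite

q = np.array([0.6, 0.3, 0.1])    # worker's true belief
lie = np.array([0.8, 0.1, 0.1])  # a misreport to compare against
for rule in (spherical_reward, quadratic_reward):
    # Truthful reporting should never pay less in expectation.
    assert expected_utility(rule, A, q, q) >= expected_utility(rule, A, q, lie)

Positive definiteness is what makes the A-weighted inner product a genuine inner product, so the Cauchy-Schwarz inequality (spherical case) and strict concavity of the expected score (quadratic case) place the expected-utility maximum exactly at the truthful report.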
Original language: English
Title of host publication: PRIMA 2015
Subtitle of host publication: Principles and Practice of Multi-Agent Systems - 18th International Conference, Proceedings
Editors: Paolo Torroni, Andrea Omicini, Jane Hsu, Qingliang Chen, Serena Villata
Publisher: Springer Verlag
Pages: 400-415
Number of pages: 16
ISBN (Print): 9783319255231
DOI: 10.1007/978-3-319-25524-8_25
Publication status: Published - Jan 1 2015
Event: 18th International Conference on Principles and Practice of Multi-Agent Systems, PRIMA 2015 - Bertinoro, Italy
Duration: Oct 26 2015 - Oct 30 2015

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9387
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 18th International Conference on Principles and Practice of Multi-Agent Systems, PRIMA 2015
Country: Italy
City: Bertinoro
Period: 10/26/15 - 10/30/15

Fingerprint

  • Reward
  • Scoring
  • Prediction
  • Expected utility
  • Inner product
  • Probability distribution
  • Predictive distribution
  • Categorical
  • Positive definite
  • Experimental results

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Sakurai, Y., Shinoda, M., Oyama, S., & Yokoo, M. (2015). Flexible reward plans for crowdsourced tasks. In P. Torroni, A. Omicini, J. Hsu, Q. Chen, & S. Villata (Eds.), PRIMA 2015: Principles and Practice of Multi-Agent Systems - 18th International Conference, Proceedings (pp. 400-415). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9387). Springer Verlag. https://doi.org/10.1007/978-3-319-25524-8_25

Flexible reward plans for crowdsourced tasks. / Sakurai, Yuko; Shinoda, Masato; Oyama, Satoshi; Yokoo, Makoto.

PRIMA 2015: Principles and Practice of Multi-Agent Systems - 18th International Conference, Proceedings. ed. / Paolo Torroni; Andrea Omicini; Jane Hsu; Qingliang Chen; Serena Villata. Springer Verlag, 2015. p. 400-415 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9387).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Sakurai, Y, Shinoda, M, Oyama, S & Yokoo, M 2015, Flexible reward plans for crowdsourced tasks. in P Torroni, A Omicini, J Hsu, Q Chen & S Villata (eds), PRIMA 2015: Principles and Practice of Multi-Agent Systems - 18th International Conference, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9387, Springer Verlag, pp. 400-415, 18th International Conference on Principles and Practice of Multi-Agent Systems, PRIMA 2015, Bertinoro, Italy, 10/26/15. https://doi.org/10.1007/978-3-319-25524-8_25
Sakurai Y, Shinoda M, Oyama S, Yokoo M. Flexible reward plans for crowdsourced tasks. In Torroni P, Omicini A, Hsu J, Chen Q, Villata S, editors, PRIMA 2015: Principles and Practice of Multi-Agent Systems - 18th International Conference, Proceedings. Springer Verlag. 2015. p. 400-415. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). https://doi.org/10.1007/978-3-319-25524-8_25
Sakurai, Yuko ; Shinoda, Masato ; Oyama, Satoshi ; Yokoo, Makoto. / Flexible reward plans for crowdsourced tasks. PRIMA 2015: Principles and Practice of Multi-Agent Systems - 18th International Conference, Proceedings. editor / Paolo Torroni ; Andrea Omicini ; Jane Hsu ; Qingliang Chen ; Serena Villata. Springer Verlag, 2015. pp. 400-415 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
@inproceedings{c61bc468fd3843f59078d6e43a0cfda4,
title = "Flexible reward plans for crowdsourced tasks",
abstract = "We develop flexible reward plans to elicit truthful predictive probability distribution over a set of uncertain events from workers. In general, strictly proper scoring rules for categorical events only reward a worker for an event that actually occurred. However, different incorrect predictions vary in quality, and the principal would like to assign different rewards to them, according to her subjective similarity among events; e.g. a prediction of overcast is closer to sunny than rainy. We propose concrete methods so that the principal can assign rewards for incorrect predictions according to her similarity between events. We focus on two representative examples of strictly proper scoring rules: spherical and quadratic, where a worker’s expected utility is represented as the inner product of her truthful predictive probability and her declared probability. In this paper, we generalize the inner product by introducing a reward matrix that defines a reward for each prediction outcome pair. We first show that if the reward matrix is symmetric and positive definite, both the spherical and quadratic proper scoring rules guarantee the maximization of a worker’s expected utility when she truthfully declares her prediction. We next compare our rules with the original spherical/quadratic proper scoring rules in terms of the variance of rewards obtained by workers. Finally, we show our experimental results using Amazon Mechanical Turk.",
author = "Yuko Sakurai and Masato Shinoda and Satoshi Oyama and Makoto Yokoo",
year = "2015",
month = "1",
day = "1",
doi = "10.1007/978-3-319-25524-8_25",
language = "English",
isbn = "9783319255231",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Verlag",
pages = "400--415",
editor = "Paolo Torroni and Andrea Omicini and Jane Hsu and Qingliang Chen and Paolo Torroni and Andrea Omicini and Jane Hsu and Qingliang Chen and Serena Villata and Serena Villata",
booktitle = "PRIMA 2015",
address = "Germany",

}

TY - GEN

T1 - Flexible reward plans for crowdsourced tasks

AU - Sakurai, Yuko

AU - Shinoda, Masato

AU - Oyama, Satoshi

AU - Yokoo, Makoto

PY - 2015/1/1

Y1 - 2015/1/1

AB - We develop flexible reward plans to elicit truthful predictive probability distributions over a set of uncertain events from workers. In general, strictly proper scoring rules for categorical events reward a worker only for the event that actually occurred. However, different incorrect predictions vary in quality, and the principal would like to assign different rewards to them according to her subjective similarity among events; e.g., a prediction of overcast is closer to sunny than to rainy. We propose concrete methods by which the principal can assign rewards for incorrect predictions according to this similarity. We focus on two representative examples of strictly proper scoring rules, spherical and quadratic, where a worker’s expected utility is represented as the inner product of her truthful predictive probability and her declared probability. In this paper, we generalize the inner product by introducing a reward matrix that defines a reward for each prediction-outcome pair. We first show that if the reward matrix is symmetric and positive definite, both the spherical and quadratic proper scoring rules guarantee that a worker maximizes her expected utility by truthfully declaring her prediction. We next compare our rules with the original spherical/quadratic proper scoring rules in terms of the variance of rewards obtained by workers. Finally, we present experimental results obtained using Amazon Mechanical Turk.

UR - http://www.scopus.com/inward/record.url?scp=84950336549&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84950336549&partnerID=8YFLogxK

U2 - 10.1007/978-3-319-25524-8_25

DO - 10.1007/978-3-319-25524-8_25

M3 - Conference contribution

AN - SCOPUS:84950336549

SN - 9783319255231

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 400

EP - 415

BT - PRIMA 2015

A2 - Torroni, Paolo

A2 - Omicini, Andrea

A2 - Hsu, Jane

A2 - Chen, Qingliang

A2 - Villata, Serena

PB - Springer Verlag

ER -