### Abstract

We develop flexible reward plans to elicit truthful predictive probability distributions over a set of uncertain events from workers. In general, strictly proper scoring rules for categorical events reward a worker only for the event that actually occurred. However, different incorrect predictions vary in quality, and the principal would like to assign them different rewards according to her subjective similarity between events; e.g., a prediction of overcast is closer to sunny than to rainy. We propose concrete methods by which the principal can assign rewards for incorrect predictions according to her similarity between events. We focus on two representative examples of strictly proper scoring rules, the spherical and the quadratic, in which a worker's expected utility is represented as the inner product of her truthful predictive probability and her declared probability. In this paper, we generalize this inner product by introducing a reward matrix that defines a reward for each prediction-outcome pair. We first show that if the reward matrix is symmetric and positive definite, both the spherical and quadratic proper scoring rules guarantee that a worker's expected utility is maximized when she truthfully declares her prediction. We next compare our rules with the original spherical/quadratic proper scoring rules in terms of the variance of the rewards obtained by workers. Finally, we present experimental results obtained using Amazon Mechanical Turk.
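The reward-matrix idea described in the abstract can be sketched as follows. This is an illustrative reconstruction for the quadratic case only, not the paper's exact formulation; the event names, the example matrix `A`, and the function names are hypothetical. With `A` equal to the identity, the score reduces to the classic quadratic scoring rule.

```python
# Illustrative sketch: a quadratic scoring rule generalized by a reward
# matrix A (symmetric positive definite), as described in the abstract.
import numpy as np

def generalized_quadratic_score(A, q, outcome):
    """Reward for declared distribution q when event `outcome` occurs.

    With A = I this reduces to the classic quadratic scoring rule
    2*q[outcome] - q.q.
    """
    return 2.0 * (A @ q)[outcome] - q @ A @ q

def expected_score(A, p, q):
    """Expected reward under true belief p for declaration q: 2 p'Aq - q'Aq."""
    return sum(p[i] * generalized_quadratic_score(A, q, i) for i in range(len(p)))

# Hypothetical symmetric positive-definite reward matrix encoding the
# principal's similarity between events (sunny, overcast, rainy):
# overcast is "closer" to sunny than rainy is.
A = np.array([[1.0, 0.5, 0.1],
              [0.5, 1.0, 0.4],
              [0.1, 0.4, 1.0]])

p = np.array([0.6, 0.3, 0.1])  # the worker's true belief
truthful = expected_score(A, p, p)

# Truthful reporting maximizes expected reward, because the gap is
# p'Ap - (2 p'Aq - q'Aq) = (p-q)'A(p-q) >= 0 whenever A is positive definite.
for q in (np.array([0.5, 0.4, 0.1]), np.array([1.0, 0.0, 0.0])):
    assert truthful >= expected_score(A, p, q)
```

The positive-definiteness condition is exactly what makes the quadratic form `(p-q)' A (p-q)` nonnegative, so any misreport can only lower the worker's expected reward.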

Original language | English
---|---
Title of host publication | PRIMA 2015
Subtitle of host publication | Principles and Practice of Multi-Agent Systems - 18th International Conference, Proceedings
Editors | Paolo Torroni, Andrea Omicini, Jane Hsu, Qingliang Chen, Serena Villata
Publisher | Springer Verlag
Pages | 400-415
Number of pages | 16
ISBN (Print) | 9783319255231
DOIs | https://doi.org/10.1007/978-3-319-25524-8_25
Publication status | Published - Jan 1 2015
Event | 18th International Conference on Principles and Practice of Multi-Agent Systems, PRIMA 2015 - Bertinoro, Italy. Duration: Oct 26 2015 → Oct 30 2015

### Publication series

Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
---|---
Volume | 9387
ISSN (Print) | 0302-9743
ISSN (Electronic) | 1611-3349

### Other

Other | 18th International Conference on Principles and Practice of Multi-Agent Systems, PRIMA 2015
---|---
Country | Italy
City | Bertinoro
Period | 10/26/15 → 10/30/15

### All Science Journal Classification (ASJC) codes

- Theoretical Computer Science
- Computer Science (all)

### Cite this

*PRIMA 2015: Principles and Practice of Multi-Agent Systems - 18th International Conference, Proceedings* (pp. 400-415). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9387). Springer Verlag. https://doi.org/10.1007/978-3-319-25524-8_25

**Flexible reward plans for crowdsourced tasks.** / Sakurai, Yuko; Shinoda, Masato; Oyama, Satoshi; Yokoo, Makoto.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

*PRIMA 2015: Principles and Practice of Multi-Agent Systems - 18th International Conference, Proceedings.* Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9387, Springer Verlag, pp. 400-415, 18th International Conference on Principles and Practice of Multi-Agent Systems, PRIMA 2015, Bertinoro, Italy, 10/26/15. https://doi.org/10.1007/978-3-319-25524-8_25
TY - GEN

T1 - Flexible reward plans for crowdsourced tasks

AU - Sakurai, Yuko

AU - Shinoda, Masato

AU - Oyama, Satoshi

AU - Yokoo, Makoto

PY - 2015/1/1

Y1 - 2015/1/1

AB - We develop flexible reward plans to elicit truthful predictive probability distributions over a set of uncertain events from workers. In general, strictly proper scoring rules for categorical events reward a worker only for the event that actually occurred. However, different incorrect predictions vary in quality, and the principal would like to assign them different rewards according to her subjective similarity between events; e.g., a prediction of overcast is closer to sunny than to rainy. We propose concrete methods by which the principal can assign rewards for incorrect predictions according to her similarity between events. We focus on two representative examples of strictly proper scoring rules, the spherical and the quadratic, in which a worker's expected utility is represented as the inner product of her truthful predictive probability and her declared probability. In this paper, we generalize this inner product by introducing a reward matrix that defines a reward for each prediction-outcome pair. We first show that if the reward matrix is symmetric and positive definite, both the spherical and quadratic proper scoring rules guarantee that a worker's expected utility is maximized when she truthfully declares her prediction. We next compare our rules with the original spherical/quadratic proper scoring rules in terms of the variance of the rewards obtained by workers. Finally, we present experimental results obtained using Amazon Mechanical Turk.

UR - http://www.scopus.com/inward/record.url?scp=84950336549&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84950336549&partnerID=8YFLogxK

U2 - 10.1007/978-3-319-25524-8_25

DO - 10.1007/978-3-319-25524-8_25

M3 - Conference contribution

AN - SCOPUS:84950336549

SN - 9783319255231

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 400

EP - 415

BT - PRIMA 2015

A2 - Torroni, Paolo

A2 - Omicini, Andrea

A2 - Hsu, Jane

A2 - Chen, Qingliang

A2 - Villata, Serena

PB - Springer Verlag

ER -