Journal Paper (International): Empirical Study on Effects of Self-Correction in Crowdsourced Microtasks
Masaki Obayashi (University of Tsukuba), Hiromi Morita (University of Tsukuba), Masaki Matsubara (University of Tsukuba), Nobuyuki Shimizu, Atsuyuki Morishima (University of Tsukuba)
Human Computation Journal
Self-correction for crowdsourced tasks is a two-stage setting in which a crowd worker first performs a task, then reviews the task results of other workers, and is given a chance to update their own results according to the review. Self-correction was proposed as a complement to statistical quality-control algorithms, in which multiple workers independently perform the same task; it can provide higher-quality results at low additional cost. However, its effects have thus far been demonstrated only in simulations, and empirical evaluations are needed. In addition, because self-correction provides feedback to workers, an interesting question arises: is perceptual learning observed in self-correction tasks? This paper reports our experimental results on self-correction with a real-world crowdsourcing service. We find that: (1) Self-correction is effective for making workers reconsider their judgments. (2) Self-correction is more effective if workers are shown the task results of higher-quality workers during the second stage. (3) A perceptual learning effect is observed in some cases; self-correction can provide feedback that shows workers how to give high-quality answers in future tasks. (4) The perceptual learning effect is observed particularly with workers who moderately change their answers in the second stage, which suggests that we can measure the learning potential of workers. However, (5) no long-term transfer effects of the self-correction task to other similar tasks were observed, suggesting that long-term transfer between different self-correction microtasks is unlikely. These findings imply that requesters and crowdsourcing services can use the self-correction approach to construct a positive feedback loop that yields better task results.
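As a rough illustration of the two-stage setting described above, the following Python sketch simulates a worker who answers a binary task, is shown a reference answer from another worker, and may revise their own answer. The accuracies, the revision rule, and all function and parameter names here are illustrative assumptions for the sketch, not parameters or results from the paper's experiments.

```python
import random

def self_correction_task(worker_accuracy, reviewer_accuracy, truth, revise_prob=0.5):
    """Simulate one two-stage self-correction microtask (toy model).

    Stage 1: the worker answers a binary task independently.
    Stage 2: the worker sees a reference answer from another worker
             and may revise their own answer if the two disagree.
    """
    # Stage 1: independent answer, correct with probability worker_accuracy.
    answer = truth if random.random() < worker_accuracy else 1 - truth
    # Reference answer shown during the review stage.
    reference = truth if random.random() < reviewer_accuracy else 1 - truth
    # Stage 2: on disagreement, the worker switches with probability revise_prob.
    if reference != answer and random.random() < revise_prob:
        answer = reference
    return answer

# Toy comparison: stage-1 accuracy alone vs. after reviewing a
# higher-quality worker's answer (cf. finding (2) in the abstract).
random.seed(0)
n = 10_000
baseline = sum(self_correction_task(0.7, 0.9, 1, revise_prob=0.0) == 1 for _ in range(n)) / n
corrected = sum(self_correction_task(0.7, 0.9, 1, revise_prob=0.5) == 1 for _ in range(n)) / n
print(f"stage-1 only: {baseline:.3f}, with self-correction: {corrected:.3f}")
```

Under these assumed parameters, the simulated accuracy rises when the reference answers come from a more accurate worker, which is the intuition behind the second finding; the real experiments measure this with actual crowd workers rather than a probabilistic model.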