The first Crowdsourcing at Scale workshop was held on November 9 at the Conference on Human Computation and Crowdsourcing (HCOMP 2013) in Palm Springs, Calif. A big part of the workshop was a Shared Task Challenge, in which we invited attendees to devise their own methods for accurately aggregating the judgments, or crowd labels, in large crowdsourced datasets collected by Google and CrowdFlower. We had two winning teams!
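For readers new to the task: the simplest aggregation strategy, and a common baseline in this line of work, is per-item majority voting. Here is a minimal sketch of that baseline (the data format is hypothetical, not the challenge's actual one):

```python
from collections import Counter

def majority_vote(judgments):
    """Aggregate crowd labels by taking the most common label per item.

    judgments: dict mapping item_id -> list of labels from different workers.
    Returns a dict mapping item_id -> the majority label.
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in judgments.items()}

# Hypothetical example: three workers label two items.
judgments = {
    "q1": ["cat", "cat", "dog"],
    "q2": ["dog", "dog", "dog"],
}
print(majority_vote(judgments))  # {'q1': 'cat', 'q2': 'dog'}
```

The winning entries, of course, went well beyond this kind of baseline, for instance by modeling the reliability of individual workers alongside the true labels.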

From UC Irvine/MIT CSAIL (“Team UCI”) are Qiang Liu, Jian Peng, and Alexander Ihler, who are interested in human computation/crowdsourcing because of its intrinsic connection to machine learning and the potential for these two fields to benefit from each other.

From Microsoft Research/Bing and the University of Southampton (“Team MSR”) are Matteo Venanzi, John Guiver, Gabriella Kazai, and Pushmeet Kohli, who have researched how crowdsourcing models can be extended to information-rich settings and how these models can be scaled to large, real-world datasets.