SIGIR 2010 Workshop on Crowdsourcing for Search Evaluation
Call for Participation
Proceedings: [.pdf] [.bib]

Workshop Program (July 23, 2010)


Invited Talks

Design of experiments for crowdsourcing search evaluation: challenges and opportunities.

Omar Alonso, Microsoft Bing.

Additional reference: slides from Alonso's ECIR 2010 Tutorial.

Insights into Mechanical Turk.

Adam Bradley, Amazon.

Better Crowdsourcing through Automated Methods for Quality Control.

Lukas Biewald, CrowdFlower.

Paper Presentations

Crowdsourcing for Affective Annotation of Video: Development of a Viewer-reported Boredom Corpus.

Mohammad Soleymani and Martha Larson
Runner-Up: Most Innovative Paper Award ($100 USD prize thanks to Microsoft Bing)
Crowdsourcing Preference Judgments for Evaluation of Music Similarity Tasks.

Julian Urbano, Jorge Morato, Monica Marrero and Diego Martin
Winner: Most Innovative Paper Award ($400 USD prize thanks to Microsoft Bing)

Ensuring quality in crowdsourced search relevance evaluation.

John Le, Andy Edmonds, Vaughn Hester and Lukas Biewald

An Analysis of Assessor Behavior in Crowdsourced Preference Judgments.

Dongqing Zhu and Ben Carterette

Logging the Search Self-Efficacy of Amazon Mechanical Turkers.

Henry Feild, Rosie Jones, Robert C. Miller, Rajeev Nayak, Elizabeth F. Churchill and Emre Velipasaoglu

Crowdsourcing a News Query Classification Dataset.

Richard M. C. McCreadie, Craig Macdonald and Iadh Ounis
Detecting Uninteresting Content in Text Streams.

Omar Alonso, Chad Carson, David Gerster, Xiang Ji and Shubha U. Nabar