CrowdMM'13 is over. We had a great workshop. Thank you to everyone who contributed to its success. Please see photos and the workshop report here. The slides from the opening presentation are available here.
- November 2, 2013

We are happy to announce that the Best Crowdsourcing Idea award goes to Claudia Hauff, TU Delft, for "Evaluating the influence of geography and culture in text-to-image translations". Claudia receives US$500 in credits sponsored by Microworkers.
- November 2, 2013

The technical program for CrowdMM'13 is now available.
- September 9, 2013

The keynote for CrowdMM'13, titled "When the Crowd Watches the Crowd: Understanding Impressions in Online Conversational Video", will be delivered by Daniel Gatica-Perez.
- July 29, 2013

The list of accepted papers and instructions for authors have been posted.
- July 24, 2013

We are pleased to announce the winning entries in the Crowdsourcing for Multimedia Ideas Competition.
- July 15, 2013

About CrowdMM

CrowdMM 2013 is the sequel to the highly successful inaugural CrowdMM 2012 workshop (see the workshop report, program, photos, and tweets here). The CrowdMM 2013 workshop will continue to foster close interaction among researchers interested in crowdsourcing methodologies and their application to solving multimedia research challenges.


Crowdsourcing--leveraging a large number of human contributors and the capabilities of human computation--has enormous potential to address key challenges in the area of multimedia research. Applications of crowdsourcing range from the exploitation of unsolicited user contributions, such as using tags to aid image understanding, to utilizing crowdsourcing platforms and marketplaces to micro-outsource tasks such as semantic video annotation. Further, crowdsourcing offers a time- and resource-efficient method for collecting large volumes of input for system design or evaluation, making it possible to optimize multimedia systems more rapidly and to address human factors more effectively.

At present, crowdsourcing remains notoriously difficult to exploit effectively in multimedia settings, because users and workers are highly sensitive to changes in the form and parameterization of their activities. For example, on a crowdsourcing platform, workers are known to react differently depending on how a multimedia annotation task is presented or explained and on how they are incentivized (e.g., compensation, appeal of the task). A thorough understanding of crowdsourcing for multimedia will be crucial in enabling the field to address these challenges effectively.

Call for Papers

Crowd image courtesy of James Cridland