The power of crowds – leveraging a large number of human contributors and the capabilities of human computation – has enormous potential to address key challenges in the area of multimedia research. Applications range from the exploitation of unsolicited user contributions, such as using tags to aid understanding of the visual content of yet-unseen images, to utilizing crowdsourcing platforms and marketplaces like Amazon’s Mechanical Turk and CrowdFlower, which micro-outsource tasks such as semantic video annotation to a large population of workers. Further, crowdsourcing offers a time- and resource-efficient method for collecting large volumes of input for system design and evaluation, making it possible to optimize multimedia systems more rapidly and to address human factors more effectively.
At present, crowdsourcing remains notoriously difficult to exploit effectively in multimedia settings: the challenge arises because a community of users or workers is a complex, dynamic system that is highly sensitive to changes in the form and parameterization of its activities. For example, on a crowdsourcing platform, workers are known to react differently depending on how a multimedia annotation task is presented or explained and on how they are incentivized (e.g., compensation, appeal of the task). A thorough understanding of crowdsourcing for multimedia will be crucial to enabling the field to address these challenges effectively.
CrowdMM 2014 builds upon the successful experience of two previous editions, held in 2012 and 2013, the latter of which attracted more than 40 participants. It will solicit novel contributions to multimedia research that make use not only of human intelligence but also of human plurality. We will especially encourage contributions that propose solutions to the key challenges impeding widespread adoption of crowdsourcing paradigms in the multimedia research community. These include: identifying optimal crowd members (e.g., user expertise, worker reliability); providing effective explanations (i.e., good task design); controlling noise and quality in the results; designing incentive structures that do not breed cheating; coping with adversarial environments; gathering necessary background information about crowd members without violating their privacy; and controlling task descriptions.