Dagstuhl Seminar 13361
Crowdsourcing: From Theory to Practice and Long-Term Perspectives
(Sep 1 – Sep 4, 2013)
Organizers
- Claudio Bartolini (HP Labs - Palo Alto, US)
- Tobias Hoßfeld (Universität Würzburg, DE)
- Phuoc Tran-Gia (Universität Würzburg, DE)
- Maja Vukovic (IBM TJ Watson Research Center - Yorktown Heights, US)
Contact
- Annette Beyer (for administrative questions)
Program
Crowdsourcing is an emerging service platform and business model on the Internet. In contrast to outsourcing, where a job is performed by a designated worker or employee, crowdsourcing means outsourcing a job to a large, anonymous crowd of workers, the so-called human cloud, in the form of an open call. Current research in crowdsourcing addresses the following issues: crowdsourcing as a novel methodology for user-centered research; development of new services and applications based on human sensing, computation, and problem solving; engineering of improved crowdsourcing platforms including quality control mechanisms; incentive design and gamification of work; usage of crowdsourcing for professional business; and theoretical frameworks for evaluation. Crowdsourcing may have a huge impact on the Internet and its technical infrastructure, on society, and on the future of work. This seminar therefore helps coordinate research efforts in the different communities, especially in the US, which currently leads the crowdsourcing market, and in Europe. In summary, crowdsourcing will be a guiding paradigm and will shape the evolution of work in the coming years.
This Dagstuhl Seminar brings together experts from the different research fields as well as experts from industry with a practical background in the deployment, operation, or usage of crowdsourcing platforms. From industry, real-world problem statements, requirements and challenges, position statements, innovative use cases, and practical experiences are desired. The collection and analysis of the practical experiences of the different crowdsourcing stakeholders are among the important outcomes of the Dagstuhl Seminar. Since the aim is to bring together researchers from academia and industry, identifying problems and challenges and ways to tackle them will be very inspiring. In particular, platform providers may report on their experiences and lessons learned while building and operating their platforms. The experiences of experiment designers and employers are equally crucial: what are the pitfalls, what harms user engagement, and how can the quality of work be fostered?
As the field of crowdsourcing is young, a common terminology, a classification and taxonomy of crowdsourcing systems, and evaluation frameworks are required; deriving such a research methodology is one of the goals of the Dagstuhl Seminar. The impact of crowdsourcing shall be discussed from different perspectives. In this context, we invite experts from different research disciplines to discuss the impact on society, on business and economics, on law, and also on the Internet infrastructure. To be clear, the scope of the seminar is on technical challenges, but the potential impact and long-term perspectives have to be discussed from an interdisciplinary point of view, too. Theoretical results and research methodologies from different disciplines may improve current platforms and applications.
Within the Dagstuhl Seminar, the following topics will be addressed.
- Quo vadis crowdsourcing: from sensing to problem solving: Crowdsourcing may be used for simple micro-tasks such as sensing information (e.g., with smartphones) or tagging data (e.g., labeling pictures). One challenge is automatic sensing, where relevant data (e.g., for environmental pollution sensing) is collected and analyzed automatically and collaboratively among users. Furthermore, the energy consumption of mobile devices acting as mobile human sensors has to be addressed. As a next step, the crowd is used as a problem solver. In this context, human-based computation is gaining momentum: the machine cloud "asks" the human cloud to solve a problem. The proposed solutions then have to be collected, aggregated, interpreted, evaluated, and integrated (a minimal aggregation sketch follows this list).
- Mechanisms for improving crowdsourcing: To reach the long-term perspectives of crowdsourcing, mechanisms are required that improve the current platforms, which must therefore meet various requirements. Such mechanisms address recommendation systems, anonymous user profiles and specialized crowds, automated task design, incentive design, quality assurance, and reliability.
- Applications and use cases: The development of improved crowdsourcing mechanisms will be based on relevant and innovative use cases. These use cases span two orthogonal dimensions, type of work and operational conditions: crowdsensing considers in particular mobile or participatory sensing as part of ubiquitous crowdsourcing; crowdsolving lets users perform tasks in categories such as field research, photography and media, etc.; crowdtesting utilizes the human cloud for conducting scientific studies, e.g., in the context of user-perceived quality; another relevant category is data extraction, e.g., sorting data according to certain criteria such as the quality of music; further applications include crowdvoting for gathering opinions and trends, as well as crowdwisdom for gathering knowledge (e.g., Wikipedia).
- Operational conditions: Here, we distinguish several operational conditions. Enterprise crowdsourcing targets professional usage within an enterprise; in that context, efforts that engage employees must be separated from those that engage public crowds. Real-time crowdsourcing addresses the completion of work in real time and raises additional challenges, also for the technical infrastructure, which must cope with massive simultaneous requests, i.e., flash crowd effects as known from P2P networks. As a result of ubiquitous connectivity and advances in mobile technologies, ubiquitous crowdsourcing emerges, with mobile users seamlessly forming interactive networks and participating in a variety of tasks that involve gathering, analyzing, and sharing data, such as reporting security threats, natural disasters, or information for location-based services.
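The first topic above notes that crowd answers must be collected, aggregated, evaluated, and integrated. As a purely illustrative sketch of the simplest such aggregation step, the following applies majority voting over redundant answers per task; all names, data, and thresholds are assumptions for illustration, not part of any platform discussed at the seminar.

```python
from collections import Counter

def aggregate_by_majority(answers: dict[str, list[str]], min_votes: int = 3):
    """Aggregate redundant crowd answers per task by majority vote.

    answers maps a task id to the raw answers submitted by workers.
    Tasks with too few answers or without a strict majority are
    flagged for re-posting to the crowd instead of being integrated.
    """
    accepted, reposted = {}, []
    for task_id, votes in answers.items():
        top_answer, top_count = Counter(votes).most_common(1)[0]
        # Require a quorum and a strict majority before integrating.
        if len(votes) >= min_votes and top_count > len(votes) / 2:
            accepted[task_id] = top_answer
        else:
            reposted.append(task_id)
    return accepted, reposted

# Example: three image-labeling micro-tasks answered by the crowd.
raw = {
    "img-001": ["cat", "cat", "dog"],
    "img-002": ["tree", "bush", "tree", "tree"],
    "img-003": ["car"],  # not enough redundancy yet
}
accepted, reposted = aggregate_by_majority(raw)
print(accepted)   # {'img-001': 'cat', 'img-002': 'tree'}
print(reposted)   # ['img-003']
```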
Towards a common research methodology and long-term perspectives of crowdsourcing, the seminar will address the following research questions.
- What is an appropriate taxonomy for classifying and analyzing crowdsourcing systems? How can crowdsourcing tasks be grouped by their task complexity and along the key challenges in successfully harvesting the expertise of large human networks? (A sketch of such a classification as a data structure follows this list.)
- Which use cases and applications will exploit the potential of crowdsourcing?
- How does the research community approach improved crowdsourcing mechanisms, e.g., for quality and cost control or the reliability of users and devices? Which requirements and challenges arise under particular operational conditions, e.g., in ubiquitous crowdsourcing due to user mobility in time and space?
- How can incentive schemes be designed for coordinated problem solving by a crowd of individual humans with their own goals and interests? How can gamification of work be realized for improved user engagement? How can the expertise of users be identified? How can such incentive schemes be implemented technically?
- How can the experiment and task design be standardized? Which kinds of APIs or templates are promising and useful in practice?
- Which objectives must be fulfilled, and which capabilities must platforms have, to provide Future Internet services built on top of crowdsourcing facilities?
- How can crowdsourcing systems be evaluated? Which common research methodologies are applicable? Which theories and models from various fields are applicable, including artificial intelligence, multi-agent systems, game theory, operations research, and human-computer interaction? How can human-centric measures such as costs, availability, dependability, and usability, including device-specific properties, be included in evaluation frameworks?
- What will the research agenda for crowdsourcing look like in the coming years?
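The first research question above asks for a taxonomy of crowdsourcing systems. As a purely illustrative sketch of what such a classification could look like as a data structure, the following encodes the two orthogonal dimensions from the use-case discussion above, type of work and operational conditions; the enumeration values and class names are assumptions for illustration, not an agreed taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

class WorkType(Enum):
    # Type-of-work dimension, mirroring the use-case list above.
    CROWDSENSING = "crowdsensing"
    CROWDSOLVING = "crowdsolving"
    CROWDTESTING = "crowdtesting"
    DATA_EXTRACTION = "data extraction"
    CROWDVOTING = "crowdvoting"
    CROWDWISDOM = "crowdwisdom"

class OperationalCondition(Enum):
    # Operational-conditions dimension discussed above.
    ENTERPRISE = "enterprise"
    REAL_TIME = "real-time"
    UBIQUITOUS = "ubiquitous"

@dataclass
class CrowdsourcingSystem:
    """A system classified along the two orthogonal dimensions."""
    name: str
    work_type: WorkType
    condition: OperationalCondition

# Example: classifying a hypothetical radiation-sensing application.
system = CrowdsourcingSystem("radiation-map", WorkType.CROWDSENSING,
                             OperationalCondition.UBIQUITOUS)
print(system)
```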
Please note that a topically related seminar on "Cloud-based Software Crowdsourcing" (Dagstuhl Seminar 13362), organized by Michael N. Huhns, Wei Li, Martin Schader, and Wei-Tek Tsai, takes place in parallel to this seminar. We plan common sessions on topics of mutual interest.
Press/News
- "Was ist Crowdsourcing?" press release (in German)
Summary
Over the past several years, crowdsourcing has emerged as a new research theme, but also as a new service platform and business model on the Internet for harnessing the skills of the large, network-connected online crowd. While the research community has not yet recognized crowdsourcing as an entirely new discipline, many research challenges remain open and need to be addressed to ensure its successful application in academia, industry, and the public sector. Crowdsourcing research intersects many existing domains and brings new challenges to the surface: crowdsourcing as a novel methodology for user-centered research; development of new services and applications based on human sensing, computation, and problem solving; engineering of improved crowdsourcing platforms including quality control mechanisms; incentive design and gamification of work; usage of crowdsourcing for professional business; and theoretical frameworks for evaluation. Crowdsourcing, as a new means of engaging human capital online, increasingly has an impact on the Internet and its technical infrastructure, on society, and on the future of work.
With crowdsourcing gaining momentum and becoming mainstream, the objective of this Dagstuhl Seminar was to coordinate research efforts in the different communities, especially in the US, which currently leads the crowdsourcing market, and in Europe. The seminar engaged experts from different research fields (ranging from sociology to image processing) as well as experts from industry with a practical background in the deployment, operation, or usage of crowdsourcing platforms. Real-world problem statements, requirements and challenges, position statements, innovative use cases, and practical experiences from industry were tackled and discussed. The collection and analysis of the practical experiences of the different crowdsourcing stakeholders were key outcomes of the Dagstuhl Seminar. The seminar was structured so that participants used existing use cases as a driver in the discussions to envision future perspectives of the domain. To move forward, we identified the need for a common terminology, a classification and taxonomy of crowdsourcing systems, and evaluation frameworks, and we have already proposed a blueprint for these. The impact of crowdsourcing was discussed from different perspectives, with the participants' viewpoints stemming from societal, business, economic, legal, and infrastructure angles.
From the platform provider side, Nhatvi Nguyen (Sec. 3.11) presented the practical challenges of operating a crowdsourcing platform. As an industry use case, enterprise crowdsourcing was presented by Maja Vukovic (Sec. 3.14), where a snapshot of the state of IT systems and operations is generated rapidly by means of crowdsourcing. This allows for massive cost savings within the company by uncovering knowledge critical to IT services delivery. Crowdsensing is another industry use case presented at the seminar, by Florian Zeiger (Sec. 3.15). Environmental sensing in the area of safety and security was discussed from an industry point of view, along with the challenges and open questions, e.g., user privacy, data quality and integrity, efficient and reliable data collection, as well as architectural decisions and the flexible support of various business models. A concrete application of crowdsensing is radiation sensing, as shown by Shinichi Konomi (Sec. 3.7).
Beyond this, there were also discussions of multimedia-related use cases. Crowdsourcing can be used efficiently for describing and interpreting multimedia on the Internet, and it makes it possible to better address the aspects of multimedia that carry meaning for human beings. Martha Larson (Sec. 3.10) provided examples of such aspects, like the emotional impact of multimedia content and judgments concerning which multimedia is best suited for a given purpose. Klaus Diepold (Sec. 3.6) applied crowdsourcing to move subjective video quality tests from the lab into the crowd. The resulting ratings are used to train a mathematical model for predicting the subjective quality of video sequences; multivariate data analysis tools are recommended to incorporate contextual information and further validate the model. Vassilis Kostakos (Sec. 3.8) showed that the data quality of suitable subjective tests may be increased by using public displays and touch screens in cities instead of online surveys. While gamification pops up as a buzzword aiming, among other things, at increased data quality, Markus Krause (Sec. 3.9) stressed that the player should be put first, i.e., the desires of the player are paramount. In particular, task and game ideas must be linkable, while fun has to be the main motivator of the game.
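Crowdsourced subjective video tests of the kind Diepold described typically reduce the raw crowd ratings to a mean opinion score (MOS) per video before any prediction model is trained. The following is a minimal sketch of that reduction, together with a crude screening rule for inattentive raters; the function names, the screening rule, and the data are illustrative assumptions, not the method used in the talk.

```python
from statistics import mean

def mos_per_video(ratings: dict[str, dict[str, int]], min_range: int = 2):
    """Compute mean opinion scores from crowd ratings on a 1..5 scale.

    ratings maps worker id -> {video id: rating}. Workers who give
    (almost) the same score to every video are screened out as a crude
    reliability check before averaging (an illustrative rule only).
    """
    kept = {
        w: r for w, r in ratings.items()
        if max(r.values()) - min(r.values()) >= min_range
    }
    videos = {v for r in kept.values() for v in r}
    return {v: mean(r[v] for r in kept.values() if v in r) for v in videos}

crowd = {
    "w1": {"clip-a": 4, "clip-b": 2},
    "w2": {"clip-a": 5, "clip-b": 1},
    "w3": {"clip-a": 3, "clip-b": 3},  # flat rater, screened out
}
print(mos_per_video(crowd))  # {'clip-a': 4.5, 'clip-b': 1.5}
```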
General approaches for improving crowdsourcing and the resulting data quality were a topic of interest for several participants. Gianluca Demartini (Sec. 3.5) proposed modeling the workers in the crowd as a basis for quality assurance mechanisms (a gold-question sketch of this idea follows below). Alessandro Bozzon (Sec. 3.2) called for better conceptual abstractions for the design and (automatic) generation of crowd tasks and processes; a better understanding of crowd properties such as (soft and hard) skills, reliability, availability, capacity, and precision; and better tools for measuring and driving worker engagement. Cristina Cabanillas (Sec. 3.3) considered human resource management aspects on the way from workflows to crowdsourcing. Abraham Bernstein (Sec. 3.1) discussed human computers as part of computational processes, albeit with their own strengths and issues: three traits of human computation, namely motivational diversity, cognitive diversity, and error diversity, are embraced as strengths instead of weaknesses. While the main focus of the seminar was on technical challenges, the potential impact and long-term perspectives were discussed from an interdisciplinary point of view too, given the social and human aspects of crowdsourcing. These issues were also raised by Phuoc Tran-Gia (Sec. 3.13) and Joseph G. Davis (Sec. 3.4).
Overall, there were 22 participants from 9 countries and 16 institutions. The seminar was held over 2.5 days and included presentations by researchers as well as dedicated hands-on discussion sessions to identify challenges, evaluate viewpoints, and develop a research agenda for crowdsourcing. While the abstracts of the talks can be found in Section 3, a summary of the discussions arising from those impulse talks is given in Section 7. Additional abstracts and research statements without a presentation in the plenary are also included in the report, in Section 4. The different aspects of crowdsourcing were discussed in more detail in four working groups formed during the seminar: (W1) long-term perspectives and impact on economics in five years, (W2) theory: taxonomy and dimensions of crowdsourcing, (W3) industry use cases, and (W4) crowdsourcing mechanisms and design. The summaries of these working groups can be found in Section 5.
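A common way to realize the worker modeling that Demartini proposed is to estimate each worker's reliability on gold-standard tasks with known answers and to down-weight or exclude unreliable workers. The sketch below illustrates this gold-question check under assumed names, data, and threshold; it is not the specific model from his talk.

```python
def worker_accuracy(gold: dict[str, str],
                    submissions: dict[str, dict[str, str]]):
    """Estimate per-worker accuracy on gold tasks with known answers.

    gold maps task id -> correct answer; submissions maps
    worker id -> {task id: answer}. Workers are scored only on the
    gold tasks they actually answered.
    """
    scores = {}
    for worker, answers in submissions.items():
        judged = [t for t in answers if t in gold]
        if judged:
            correct = sum(answers[t] == gold[t] for t in judged)
            scores[worker] = correct / len(judged)
    return scores

gold = {"g1": "cat", "g2": "tree"}
subs = {
    "w1": {"g1": "cat", "g2": "tree", "t9": "car"},
    "w2": {"g1": "dog", "g2": "tree"},
}
print(worker_accuracy(gold, subs))  # {'w1': 1.0, 'w2': 0.5}
# Answers from workers below a chosen reliability threshold
# (say 0.7) would then be excluded from aggregation.
```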
Please note that a related seminar on "Cloud-based Software Crowdsourcing" (Dagstuhl Seminar 13362), organized by Michael N. Huhns, Wei Li, Martin Schader, and Wei-Tek Tsai, took place in parallel to this seminar. We held a joint social event and a joint session discussing research challenges and planned publications. In this late-night session, ethical issues in the area of crowdsourcing were raised in a stimulus talk by Martha Larson (TU Delft), while Munindar P. Singh (North Carolina State University) set out to provoke with his critique of current research in social computing and crowdsourcing. A summary can also be found in Section 7.
A comprehensive list of open problems and challenges in the area of crowdsourcing, as observed and stated by the participants, is another key outcome of the seminar; it is provided in Section 6.
Participants
- Abraham Bernstein (Universität Zürich, CH) [dblp]
- Kathrin Borchert (Universität Würzburg, DE) [dblp]
- Alessandro Bozzon (TU Delft, NL) [dblp]
- Cristina Cabanillas (Wirtschaftsuniversität Wien, AT) [dblp]
- Joseph Davis (The University of Sydney, AU) [dblp]
- Gianluca Demartini (University of Fribourg, CH) [dblp]
- Klaus Diepold (TU München, DE) [dblp]
- Matthias Hirth (Universität Würzburg, DE) [dblp]
- Tobias Hoßfeld (Universität Würzburg, DE) [dblp]
- Andreas Hotho (Universität Würzburg, DE) [dblp]
- Deniz Iren (Middle East Technical University - Ankara, TR) [dblp]
- Christian Keimel (TU München, DE) [dblp]
- Shinichi Konomi (University of Tokyo, JP) [dblp]
- Vassilis Kostakos (University of Oulu, FI) [dblp]
- Markus Krause (Universität Hannover, DE) [dblp]
- Martha A. Larson (TU Delft, NL) [dblp]
- Babak Naderi (TU Berlin, DE) [dblp]
- Nhatvi Nguyen (Weblabcenter, Inc. - Texas, US) [dblp]
- Munindar P. Singh (North Carolina State University - Raleigh, US) [dblp]
- Phuoc Tran-Gia (Universität Würzburg, DE) [dblp]
- Maja Vukovic (IBM TJ Watson Research Center - Yorktown Heights, US) [dblp]
- Florian Zeiger (AGT International - Darmstadt, DE) [dblp]
Classification
- mobile computing
- networks
- society / human-computer interaction
Keywords
- Crowdsourcing
- human computation
- mobile crowdsourcing
- enterprise crowdsourcing