PapersFlow Research Brief


Mobile Crowdsensing and Crowdsourcing
Research Guide

What is Mobile Crowdsensing and Crowdsourcing?

Mobile Crowdsensing and Crowdsourcing is the use of mobile devices and online crowdsourcing platforms such as Amazon's Mechanical Turk to collect data, perform participatory sensing, and conduct research. Core challenges in the field include data quality, incentive mechanisms, and truth discovery.

The field encompasses 21,085 works focused on crowdsourcing platforms for research and data collection. Key areas include data quality assessment, incentive mechanisms, mobile sensing, truth discovery, and applications in behavioral research and participatory sensing. Studies highlight platforms such as Amazon's Mechanical Turk and alternatives like Prolific.ac for experimental subject recruitment.

Topic Hierarchy

Physical Sciences → Computer Science → Computer Science Applications → Mobile Crowdsensing and Crowdsourcing
21.1K papers · 5-year growth: N/A · 339.0K total citations


Why It Matters

Mobile Crowdsensing and Crowdsourcing enables low-cost, scalable data collection for experimental research. Berinsky et al. (2012) evaluated Amazon's Mechanical Turk for subject recruitment, finding internal and external validity comparable to traditional methods (4,008 citations). Paolacci et al. (2010) confirmed MTurk's data quality through demographic analysis and experimental replications, facilitating behavioral studies (3,752 citations). These platforms also support participatory sensing and truth discovery in mobile contexts; Meade and Craig (2012) provide methods for detecting careless responses in survey data, improving reliability (3,295 citations). Palan and Schitter (2017) introduced Prolific.ac as an alternative subject pool, expanding access to online experiments (3,089 citations).

Reading Guide

Where to Start

Start with "Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk" by Berinsky et al. (2012): it provides a foundational assessment of MTurk's validity for subject recruitment and is central to understanding crowdsourcing platforms.

Key Papers Explained

Berinsky et al. (2012), in "Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk," establish MTurk's experimental validity; Paolacci et al. (2010), in "Running experiments on Amazon Mechanical Turk," complement this by validating its data quality and worker demographics. Meade and Craig (2012), in "Identifying careless responses in survey data," extend this line with methods for detecting careless responding in MTurk data. Palan and Schitter (2017), in "Prolific.ac—A subject pool for online experiments," present an alternative platform that addresses MTurk's limitations. Gubbi et al. (2013), in "Internet of Things (IoT): A vision, architectural elements, and future directions," provide IoT context for mobile sensing integration.

Paper Timeline

  • 2007 · Probabilistic Matrix Factorization (3.6K cites)
  • 2010 · Running experiments on Amazon Mechanical Turk (3.8K cites)
  • 2012 · Fog computing and its role in the internet of things (5.9K cites)
  • 2012 · Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk (4.0K cites)
  • 2013 · Internet of Things (IoT): A vision, architectural elements, and future directions (11.7K cites, most cited)
  • 2020 · Federated Learning: Challenges, Methods, and Future Directions (4.1K cites)
  • 2020 · Advances and Open Problems in Federated Learning (4.0K cites)

Papers ordered chronologically; the most-cited paper is marked.

Advanced Directions

Current directions emphasize federated learning for decentralized mobile data training, as in Li et al. (2020) "Federated Learning: Challenges, Methods, and Future Directions" and Kairouz et al. (2020) "Advances and Open Problems in Federated Learning." Fog computing extensions for low-latency mobile sensing appear in Bonomi et al. (2012) "Fog computing and its role in the internet of things." No recent preprints or news available.
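The aggregation step at the heart of these federated approaches can be sketched as follows. This is a minimal illustration of federated averaging (the "FedAvg"-style weighted mean surveyed by Li et al. and Kairouz et al.), not a full training loop; the toy arrays and dataset sizes are illustrative assumptions:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size.

    Each mobile client trains locally and shares only its parameters,
    never its raw sensed data -- the decentralization property that
    makes this attractive for mobile crowdsensing.
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round: three mobile clients with 1-D "models" and unequal data.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
avg = federated_average(weights, sizes)  # the third client counts double
```

A real deployment would repeat this round many times, with each client running local gradient steps between aggregations.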

Papers at a Glance

 # | Paper | Year | Venue | Citations
 1 | Internet of Things (IoT): A vision, architectural elements, and future directions | 2013 | Future Generation Comp... | 11.7K
 2 | Fog computing and its role in the internet of things | 2012 | | 5.9K
 3 | Federated Learning: Challenges, Methods, and Future Directions | 2020 | IEEE Signal Processing... | 4.1K
 4 | Advances and Open Problems in Federated Learning | 2020 | Foundations and Trends... | 4.0K
 5 | Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk | 2012 | Political Analysis | 4.0K
 6 | Running experiments on Amazon Mechanical Turk | 2010 | Judgment and Decision ... | 3.8K
 7 | Probabilistic Matrix Factorization | 2007 | Neural Information Pro... | 3.6K
 8 | A Survey of Collaborative Filtering Techniques | 2009 | Advances in Artificial... | 3.6K
 9 | Identifying careless responses in survey data. | 2012 | Psychological Methods | 3.3K
10 | Prolific.ac—A subject pool for online experiments | 2017 | Journal of Behavioral ... | 3.1K

Frequently Asked Questions

What is Amazon's Mechanical Turk used for in research?

Amazon's Mechanical Turk serves as a platform for recruiting subjects in experimental research, offering low-cost and easy-to-field experiments. Berinsky et al. (2012) assessed its internal and external validity, finding it suitable for political analysis studies. Paolacci et al. (2010) validated data quality through demographic data and attention checks.

How is data quality ensured in crowdsourcing surveys?

Data quality in crowdsourcing surveys is maintained by identifying careless responses using techniques like consistency checks and attention questions. Meade and Craig (2012) outlined methods for detecting such responses in anonymous Internet surveys, particularly under obligatory participation conditions. These approaches enhance reliability in behavioral research.
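Two of the indices in this literature, a long run of identical consecutive answers ("longstring") and failed instructed-response items, are straightforward to compute. A minimal sketch; the threshold of 8 and the data layout are illustrative assumptions, not values prescribed by Meade and Craig:

```python
def longstring(responses):
    """Length of the longest run of identical consecutive answers --
    a common index for flagging straight-lining respondents."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def flag_careless(answers, attention_checks, max_longstring=8):
    """Flag a respondent who straight-lines or fails an attention check.

    answers: Likert responses in presentation order.
    attention_checks: (given, required) pairs for instructed items,
    e.g. 'Select "3" for this question'.
    """
    if longstring(answers) >= max_longstring:
        return True
    return any(given != required for given, required in attention_checks)

careful = flag_careless([3, 4, 2, 5, 3, 4, 2, 1], [(3, 3)])   # not flagged
careless = flag_careless([4] * 10, [(3, 3)])                  # flagged
```

In practice such screens are combined with consistency indices and response-time analysis before excluding a respondent.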

What role does mobile sensing play in crowdsourcing?

Mobile sensing leverages devices for participatory data collection in crowdsourcing, integrated with platforms like Mechanical Turk. The field description notes its focus on mobile sensing alongside incentive mechanisms and truth discovery. Gubbi et al. (2013) connect it to IoT architectures enabling widespread sensing applications.
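The truth-discovery problem mentioned above is typically solved by alternating between estimating a truth per task and a reliability weight per worker. The following is a generic sketch of that loop, with illustrative smoothing constants and data layout; it is not a reproduction of any specific published algorithm:

```python
from collections import Counter

def truth_discovery(claims, iterations=10):
    """claims: {worker: {task: value}}. Iteratively estimates a truth
    per task (weighted vote) and a reliability weight per worker
    (agreement with the current truths)."""
    weights = {w: 1.0 for w in claims}
    truths = {}
    for _ in range(iterations):
        # Step 1: weighted majority vote per task.
        votes = {}
        for w, tasks in claims.items():
            for t, v in tasks.items():
                votes.setdefault(t, Counter())[v] += weights[w]
        truths = {t: c.most_common(1)[0][0] for t, c in votes.items()}
        # Step 2: weight = smoothed fraction of claims matching the truths.
        for w, tasks in claims.items():
            agree = sum(truths[t] == v for t, v in tasks.items())
            weights[w] = (agree + 1) / (len(tasks) + 2)
    return truths, weights

claims = {
    "w1": {"t1": "A", "t2": "B", "t3": "A"},
    "w2": {"t1": "A", "t2": "B", "t3": "A"},
    "w3": {"t1": "B", "t2": "C", "t3": "B"},  # consistently unreliable
}
truths, weights = truth_discovery(claims)
```

The unreliable worker's weight drops, so its votes count for less in later rounds, which is the core idea behind discovering truth from conflicting crowd reports.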

What are incentive mechanisms in mobile crowdsensing?

Incentive mechanisms encourage participation in mobile crowdsensing by rewarding data contributions on crowdsourcing platforms. The topic cluster explores these alongside data quality and online labor markets. Studies like those on Mechanical Turk address trade-offs in worker motivation for research tasks.
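One common family of such mechanisms is the budget-constrained reverse auction, where workers bid a cost for their sensing contribution and the platform selects a subset. A toy greedy sketch under assumed bid and budget values; real budget-feasible mechanisms also need a truthful payment rule, which is omitted here:

```python
def select_workers(bids, budget):
    """bids: {worker: (cost, value)}. Greedily pick the worker with the
    best value-per-cost ratio that still fits within the budget."""
    chosen, spent = [], 0.0
    remaining = dict(bids)
    while remaining:
        w = max(remaining, key=lambda w: remaining[w][1] / remaining[w][0])
        cost, _ = remaining.pop(w)
        if spent + cost <= budget:
            chosen.append(w)
            spent += cost
    return chosen, spent

# Illustrative bids: worker "a" offers the best value per unit cost.
chosen, spent = select_workers({"a": (2, 10), "b": (5, 12), "c": (4, 4)},
                               budget=7)
```

The design question the literature studies is how to set payments so that bidding one's true cost is each worker's best strategy, which simple greedy selection alone does not guarantee.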

What alternatives exist to Mechanical Turk for experiments?

Prolific.ac provides a subject pool for online experiments as an alternative to Mechanical Turk. Palan and Schitter (2017) describe its services explicitly designed for research, with growing adoption. It addresses limitations in data quality and participant demographics observed in MTurk studies.

Open Research Questions

  • How can incentive mechanisms be optimized for heterogeneous mobile devices in crowdsensing while preserving data privacy?
  • What methods improve truth discovery in participatory sensing data from unreliable crowdsourcing workers?
  • How do careless responses impact the validity of behavioral research conducted via online labor markets?
  • What architectural elements integrate fog computing with mobile crowdsensing for low-latency applications?
  • How does federated learning address data decentralization challenges in mobile crowdsourcing networks?

Research Mobile Crowdsensing and Crowdsourcing with AI

PapersFlow provides specialized AI tools for Computer Science researchers.

See how researchers in Computer Science & AI use PapersFlow

Field-specific workflows, example queries, and use cases.

Computer Science & AI Guide

Start Researching Mobile Crowdsensing and Crowdsourcing with AI

Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.

See how PapersFlow works for Computer Science researchers