The Internet, Policy & Politics Conferences

Oxford Internet Institute, University of Oxford

Sara C. Kingsley, Mary L. Gray, Siddharth Suri: Monopsony and the Crowd: Labor for Lemons?

Sara C. Kingsley, University of Massachusetts Amherst

Mary L. Gray, Microsoft Research

Siddharth Suri, Microsoft Research

 

Commercial crowdsourcing labor markets like Amazon Mechanical Turk (AMT) purportedly facilitate a limitless supply of contractual matches. In theory, thousands of individuals who want to earn money and the employers with jobs to offer can find each other through micro-tasks listed on “crowdwork” platforms. A direct and limitless supply of labor and tasks should produce a perfectly competitive market; however, data collected from our yearlong study of crowdwork suggest that the reverse is true. Rife with asymmetric information problems, crowdsourcing labor markets are arguably not just imperfect, but imperfect by design. When crowdsourcing labor is exchanged through commercial platforms, concerning time-of-contracting issues arise that resemble those found in labor markets where “monopsonistic competition” prevails.

Monopsony typically describes a situation where an employer has a degree of wage-setting power due to the limited number of employment opportunities available to a pool of workers. When employers hold “some power over their workers”, notable consequences include employment discrimination, “rents to jobs”, and a diminished quality of goods and services produced by workers. This inequitable distribution of market power especially disadvantages vulnerable workers, who often have less bargaining power than employers for reasons connected to socioeconomic status and demographics.
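To make this wage-setting power concrete, the following is a minimal textbook sketch (standard notation, not a result from our study): a monopsonistic employer faces an upward-sloping labor supply curve $w(L)$ rather than a flat market wage, so hiring one more worker raises the pay of every worker already employed. If $R(L)$ denotes revenue from $L$ units of labor, choosing employment to maximize profit gives

\[
\pi(L) = R(L) - w(L)\,L, \qquad \frac{d\pi}{dL} = 0 \;\Rightarrow\; R'(L^{*}) = w(L^{*}) + L^{*}\,w'(L^{*}) > w(L^{*}),
\]

so the wage paid, $w(L^{*})$, sits below the marginal revenue product of labor, and employment is lower than it would be under perfect competition.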

While some research on crowdsourcing questions the nature and quality of contractual matching on crowdwork platforms, economists have yet to consider whether the observable lack of optimal crowd-task matching online is partially due to “monopsonistic competition.” For this reason, we have obtained 341 survey responses to date from participants who do crowdwork and currently live in the United States or India. Our survey asks about the ease of finding and completing tasks online, the availability of information about jobs and requesters, comprehension of task directions, and the level of self-reported digital literacy skills, among other questions. We believe our data demonstrate that market frictions particular to crowdsourcing labor platforms afford employers a degree of monopsony power that substantially limits the quality of online contractual matching. If our hypothesis holds, monopsonistic competition in crowdsourcing labor markets negatively impacts employers and individual crowdworkers alike, and it calls for novel platform and policy remedies to mitigate the known harms that monopsony poses to the well-being of labor markets.

Labor economists, however, do not often examine the aggregate structural features of labor markets to explain apparent inconsistencies with the perfectly competitive model given by standard theory. As Alan Manning puts it, “labor economics consists of the competitive model with bits bolted into it when necessary to explain away anomalies.” He argues that when the competitive model does not explain market behavior, the model itself should become the questioned anomaly, rather than being modified to explain away what classical economic theory cannot justify. For this reason, Manning formulates a more dynamic model of competition that allows imperfect competition to define the behavior of a given labor market. We believe models of imperfect competition provide a better fit to the dynamics we observe. An understanding of how these alternative models work is necessary, however, before practitioners, technologists, and researchers can best propose and apply remedies to the problematic frictions endemic to crowdsourcing work.
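As a brief illustration of how this framework parameterizes the departure from perfect competition (the notation below is generic, not specific to our data), the wage markdown can be tied to the elasticity of labor supply facing an individual employer, $\varepsilon_{Lw}$:

\[
w = \frac{\varepsilon_{Lw}}{1 + \varepsilon_{Lw}}\, MRP_{L}
\quad\Longleftrightarrow\quad
\frac{MRP_{L} - w}{w} = \frac{1}{\varepsilon_{Lw}},
\]

where $MRP_{L}$ is the marginal revenue product of labor. Under perfect competition $\varepsilon_{Lw} \to \infty$ and wages equal marginal revenue product; the less elastic the labor supply to a given requester or platform (for instance, because workers cannot easily compare or switch between requesters), the larger the gap between what labor produces and what it is paid.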

New models that account for imperfect competition are similarly important for understanding the current nature of digital work and its potential future. We must ask, for example, whether traditional remedies for monopsony power, such as minimum wages and unionization, are viable options for digital labor markets. Further research is required to test how minimum wages could apply to markets run on what is essentially a digitized version of the historic piecework incentive schemes that often emerge during periods of rapid technological change.

Accordingly, our research extends Manning’s model of imperfect competition to an evaluation of how monopsony arises when labor is exchanged through crowdsourcing. Our project largely draws on findings from a yearlong, comparative ethnographic and quantitative study of the people who regularly use crowdsourcing labor platforms to find or post jobs online. Specifically, we look at participants exchanging labor through three major crowdsourcing platforms: Amazon.com's Mechanical Turk (AMT); Microsoft’s Universal Human Relevance System (UHRS); and MobileWorks, a startup with a social and entrepreneurial mission. We have already collected 341 responses from an online survey posted to AMT and will continue to gather data from this and other surveys over the next year. We also plan to analyze the backend data produced by workflows on UHRS and MobileWorks, and to conduct textual analysis of online discussion forums used by crowdworkers. Our project concludes with insights from crowdworkers about how to reform online labor platforms, and we provide recommended next steps for research on this topic.

References: 
1. Manning, A. (2003). Monopsony in Motion: Imperfect Competition in Labor Markets. Princeton, NJ: Princeton University Press.
2. “Monopsony and the Crowd” is a component of an ongoing research project entitled “Crowded: Digital Piecework and the Politics of Platform Responsibility in Precarious Times.” See: http://research.microsoft.com/en-us/projects/crowdwork/
