Crowdsourcing

Crowd (or citizen) journalism emerged as an effective source of news during the Arab Spring and, during the London riots of 2011, came to be seen as a trustworthy one.

Crowdfunding came of age with product successes such as the Pebble watch and with the Securities and Exchange Commission relaxing its rules on equity crowdfunding.

Crowdmapping is also big, especially in post-conflict and developing areas with poor public infrastructure: after the Haiti earthquake, with no maps of Port-au-Prince available, Haitians and international workers pieced together a crowdsourced map to aid the recovery.

Commercial crowdsourcing, or crowdworking, however, has a less noble and exciting image. For some, there is something slightly dubious about it. People being paid for ‘likes’. The market price for design and writing services being driven down to below the breadline. Amazon’s crowdworking platform, Mechanical Turk, is even named after a colossal fraud. So, is this the whole truth?

Firstly, crowdworking actually describes a very broad range of activities. A BBC reporter has written a thoroughly entertaining account of an attempt to experience a variety of different kinds of crowdwork; it is well worth reading to get an idea of the sheer breadth of forms the work takes. Essentially, though, the concept is an on-demand workforce. Some approaches, such as Fiverr, are an eclectic marketplace where sellers advertise their service and price. (These range from the vague, ‘I will illustrate anything’, to the generic, ‘I will write one search engine optimised article’, to the more specific, ‘I will create 2x vintage retro logo badges in 24h’.)

Others, such as Mechanical Turk, are microwork platforms. MTurk focuses on ‘Human Intelligence Tasks’ (HITs): ‘micro’ tasks which generally take less than a minute and need to be performed thousands of times. It is particularly popular for database completion tasks, such as attaching photographer credits to photos or finding a company URL. It uses duplication of tasks to ensure accuracy; for example, when translating a word, the same word will be shown to several different workers, and the most commonly suggested translation will be used.
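
To make the duplication idea concrete, here is a minimal sketch, in Python, of aggregating duplicated responses to a single micro-task by majority vote. The function name and sample answers are hypothetical and not part of MTurk’s API; the platform collects the duplicated assignments, and it is typically the requester who does this kind of aggregation afterwards.

```python
from collections import Counter

def majority_vote(responses):
    """Return the most common answer among duplicated responses to one task,
    along with the share of workers who agreed on it."""
    counts = Counter(responses)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(responses)

# Hypothetical answers from five workers asked to translate the same word.
worker_answers = ["maison", "maison", "domicile", "maison", "foyer"]
best, agreement = majority_vote(worker_answers)
print(best, agreement)  # -> maison 0.6
```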

MTurk also runs a qualification system, whereby requesters can specify that only workers with a certain number of previously approved HITs and a certain approval rating can perform a task. One of the most interesting uses of MTurk is for academic research. It has many applications: assessing responses to visual stimuli, and experiments in behavioural economics and psychological motivation, to name but two. In the world of web research, it can also be used to recruit people to a study, such as the Microsoft Research study on depression and Twitter users that sourced a ground truth dataset of 69,000 people through MTurk. Would this even have been possible another way? Perhaps there is a virtuous side to this maligned form of crowdsourcing after all.
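
To return to the qualification system: as a rough sketch of how a requester might express those requirements programmatically, the snippet below uses the boto3 MTurk client. The title, reward, thresholds and question file are placeholder values, and the two IDs shown are the system qualification types AWS documents for approval rate and number of approved HITs; treat the whole thing as an illustration rather than a recipe.

```python
import boto3

# Illustrative sketch only: all values below are placeholders.
mturk = boto3.client("mturk", region_name="us-east-1")

qualification_requirements = [
    {   # System qualification: percentage of assignments approved >= 95%
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
    {   # System qualification: number of HITs approved >= 1000
        "QualificationTypeId": "00000000000000000040",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [1000],
    },
]

response = mturk.create_hit(
    Title="Find the official website URL for a company",
    Description="Given a company name, paste the URL of its official website.",
    Reward="0.05",
    MaxAssignments=3,                      # duplicate the task across three workers
    AssignmentDurationInSeconds=300,
    LifetimeInSeconds=86400,
    Question=open("question.xml").read(),  # HTMLQuestion or ExternalQuestion XML
    QualificationRequirements=qualification_requirements,
)
print(response["HIT"]["HITId"])
```

Thresholds like these are the main lever a requester has for trading answer quality against the size (and cost) of the available worker pool.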
