It has been nearly a week since Malaysia Airlines flight MH370 disappeared some time after taking off from Kuala Lumpur International Airport. Despite searches led by the Malaysian authorities, there is still no trace of the plane.
There have been several red herrings. What appeared to be oil slicks were spotted off the coast of Malaysia, an oil rig worker reported seeing something on fire falling from the sky, and fuzzy images taken by Chinese satellites appeared to show what could be parts of a plane in the sea. All have come to nothing.
The Chinese government’s images are reported to have been taken just a day after the plane was lost, but it took several days to find the shapes in them. And even then, Malaysian search parties reported finding nothing at the site.
Questions remain about the best way to undertake a search like this in the age of the internet and artificial intelligence. Hope is now being placed in crowdsourcing as a way to analyse satellite imagery quickly and efficiently. US-based satellite imagery firm DigitalGlobe is calling on the public to log on to a website and examine satellite images square by square, alerting others if they see anything important. More than 2 million people around the world are now scouring the pictures for clues.
Searching in the crowd
Images from satellites, planes or even volunteers on the ground are increasingly being used by disaster responders to map out important features of an area. These images are fed, through web-based platforms, to volunteers who can then collaboratively identify specific elements in the pictures and annotate or even trace a map.
Google Crisis Map has been used to chart areas affected by storms in the US, and satellite maps were used during the Haiti earthquake response to identify collapsed buildings and navigable routes.
The MH370 case, however, is a search mission, and is much more tricky. In 2007, volunteers used Amazon Mechanical Turk and imagery provided by GeoEye to search for the wreckage of businessman Steve Fossett’s plane after he crashed in California. Even though 50,000 pairs of eyes were on the images, they failed to find the crash site. It is not clear why crowdsourcing failed in this case while it has succeeded so effectively in mapping disaster areas.
One possibility is that it is easier to delineate the broad areas affected by a disaster than to search for a particular pixel or set of pixels in a picture, which can be like finding a needle in a haystack. The latter requires good eyesight, an understanding of the features of the debris being sought and knowledge of context, such as the resolution of the picture and the height and angle from which it was taken.
How to do it right
Our experience with crowdsourcing maps from satellite imagery in the ORCHID and Collabmap projects has shown that such large crowdsourcing efforts require a combination of human computation techniques and machine learning algorithms to get the best results.
To make it work, you need to train both the groups of human volunteers and the machine learning algorithms before they start looking for wreckage. Without this training phase, reports from both algorithms and humans are likely to be noisy and prone to bias.
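As a minimal sketch of what such a training phase might look like for the human side, the snippet below scores a new volunteer on a small set of "gold standard" tiles whose correct labels are already known, and only admits them to the live search once their accuracy clears a threshold. The tile names, labels and cut-off are all invented for illustration; nothing here describes DigitalGlobe's actual platform.

```python
# Hypothetical calibration sketch: admit a volunteer to the live search
# only after they label enough known "gold standard" tiles correctly.

GOLD_TILES = {
    # tile_id -> true label ("debris" or "nothing"), known in advance
    "tile_001": "nothing",
    "tile_002": "debris",
    "tile_003": "nothing",
    "tile_004": "nothing",
    "tile_005": "debris",
}

ADMISSION_THRESHOLD = 0.8  # assumed cut-off, not an established standard


def calibration_accuracy(volunteer_labels: dict[str, str]) -> float:
    """Fraction of gold tiles the volunteer labelled correctly."""
    correct = sum(
        1 for tile, truth in GOLD_TILES.items()
        if volunteer_labels.get(tile) == truth
    )
    return correct / len(GOLD_TILES)


def admit_volunteer(volunteer_labels: dict[str, str]) -> bool:
    return calibration_accuracy(volunteer_labels) >= ADMISSION_THRESHOLD


# Example: a volunteer who mislabels one tile still qualifies.
answers = {
    "tile_001": "nothing",
    "tile_002": "debris",
    "tile_003": "nothing",
    "tile_004": "debris",   # wrong
    "tile_005": "debris",
}
print(admit_volunteer(answers))  # True: 4/5 = 0.8
```

The same held-out gold tiles can double as training data for the machine learning side, giving both halves of the system a common baseline before any live imagery is processed.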
You also need to take account of the fact that human volunteers get tired when performing the same tasks over and over again. In traditional search operations, it is well known that searchers need to take regular breaks, otherwise they run the risk of missing something.
Crowdsourced volunteers may start to make mistakes when tagging and should be replaced by new recruits when they get tired, yet they are generally left to decide for themselves how to work.
Searches could be made more effective if the organisers tested volunteers every so often with a task whose solution is already known, to see if they are losing their edge. Another option would be to let a machine learning algorithm determine when a volunteer's performance starts to deteriorate.
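One way to operationalise this, purely as an illustrative sketch, is to interleave occasional gold-standard tiles into a volunteer's queue and watch their accuracy over a sliding window, flagging them for a break once it drops below a floor. The window size and threshold below are assumptions, not a published method.

```python
from collections import deque

# Illustrative fatigue monitor: track a volunteer's accuracy on
# interleaved gold-standard tasks over a sliding window, and flag
# them for a break when it falls below an assumed floor.

WINDOW = 20           # last N gold-task results to consider (assumed)
ACCURACY_FLOOR = 0.7  # assumed threshold for "losing their edge"


class FatigueMonitor:
    def __init__(self) -> None:
        self.recent: deque[bool] = deque(maxlen=WINDOW)

    def record_gold_result(self, correct: bool) -> None:
        self.recent.append(correct)

    def needs_break(self) -> bool:
        # Only judge once the window has filled up.
        if len(self.recent) < WINDOW:
            return False
        accuracy = sum(self.recent) / len(self.recent)
        return accuracy < ACCURACY_FLOOR


# Example: a volunteer who starts sharp but tails off gets flagged.
monitor = FatigueMonitor()
for correct in [True] * 10 + [False] * 10:  # 10 hits, then 10 misses
    monitor.record_gold_result(correct)
print(monitor.needs_break())  # True: 10/20 = 0.5 < 0.7
```

A richer version could model each volunteer's error rate probabilistically, but even this simple rule would catch the steady decline that fatigue typically produces.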
Many exercises like this also approach the problem from the wrong side. Websites generally ask volunteers to add a tag when they think they’ve seen something in the satellite imagery. In the case of MH370, they are asked to tag anything that might be a piece of debris, an oil slick or a life raft. Other users then vote on whether they agree with the tag, to decide what the features in an image might mean.
This is potentially not that effective in a search exercise. It might be better to first identify the areas the crowd has flagged as containing no items of interest and rule them out. The organisers can then use machine learning to gauge the level of disagreement over the areas in which users believe they have seen something.
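A hedged sketch of that aggregation step: treat each tile's votes as a distribution over labels, rank tiles by the entropy of that distribution (tiles everyone agrees are empty score near zero and can be set aside), and escalate only the contested tiles for expert review. The labels and vote counts are invented for illustration.

```python
import math
from collections import Counter

# Illustrative aggregation: rank image tiles by how much volunteers
# disagree about them, using the entropy of the vote distribution.
# Tiles everyone agrees contain nothing can be cleared quickly;
# high-entropy tiles are escalated for expert review.


def vote_entropy(votes: list[str]) -> float:
    """Shannon entropy (bits) of the label distribution for one tile."""
    counts = Counter(votes)
    total = len(votes)
    return -sum(
        (n / total) * math.log2(n / total) for n in counts.values()
    )


tiles = {
    "tile_A": ["nothing"] * 12,                      # unanimous: clear it
    "tile_B": ["debris"] * 5 + ["nothing"] * 7,      # contested
    "tile_C": ["oil slick"] * 2 + ["nothing"] * 10,  # mildly contested
}

# Sort tiles from most to least contested.
ranked = sorted(tiles, key=lambda t: vote_entropy(tiles[t]), reverse=True)
for tile in ranked:
    print(tile, round(vote_entropy(tiles[tile]), 3))
# tile_B 0.98, tile_C 0.65, tile_A 0.0 (approximately)
```

Ranking by disagreement rather than by raw tag counts means the crowd's unanimous "nothing here" verdicts do useful work too: they shrink the search area instead of simply going uncounted.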
While it may be tempting to throw humans and machine learning algorithms at imagery in the wake of a disaster or an unfolding emergency, doing so is not a trivial effort. It needs to be carefully planned and deployed, with the right combination of human and machine computation. We haven’t quite got the optimum formula yet.