Crowd-sourcing is increasingly used to provide responses to polls and surveys on a large scale. Companies such as SurveyMonkey and Instant.ly aim to make crowd-sourced surveys commonplace by letting survey makers pose questions through a simple UI and retrieve results with relatively low latency, thanks to dedicated crowds at their disposal.
In this paper we argue that the ease with which polls can be created conceals an inherent difficulty: the survey maker does not know how many workers to hire for their survey. Asking too few may lead to sample sizes that “do not look impressive enough”; asking too many means spending extra money, which quickly becomes costly. Existing crowd-sourcing platforms do not help with this decision, nor, one can argue, do they have any incentive to do so.
We present a systematic approach to determining how many samples (i.e., workers) are required to achieve a given level of statistical significance, by showing how to automatically perform power analysis on the questions of interest. Using a range of queries, we demonstrate that power analysis can save significant amounts of money and time: frequently, only a handful of responses is required to arrive at a decision.
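To give a flavor of the kind of power analysis involved, the sketch below computes the per-group sample size needed to detect a difference between two response proportions at a given significance level and power, using the standard normal-approximation formula. This is a generic textbook calculation, not the specific machinery InterPoll implements; the function name and defaults are illustrative.

```python
import math
from statistics import NormalDist  # Python 3.8+ standard library

def required_sample_size(p1, p2, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sided test comparing two
    proportions p1 and p2, via the normal approximation:
        n = (z_{1-alpha/2} + z_{power})^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# A large expected difference (50% vs. 30%) needs only ~90 workers per group,
# while a smaller difference (50% vs. 40%) needs several hundred.
print(required_sample_size(0.5, 0.3))
print(required_sample_size(0.5, 0.4))
```

The key point the abstract makes follows directly from the formula: the required sample size shrinks quadratically as the observed effect grows, so decisive questions can often be settled with very few workers.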
We have implemented our approach within InterPoll, a programmable, developer-driven polling system that uses a generic crowd (Mechanical Turk) as a back-end. Power analysis is performed automatically, given both the structure of the query and the data being polled from the crowd. In all of our studies we are able to obtain statistically significant answers for under $30, with most costing less than $10. Our approach saves both time and money for the survey maker.