We address the challenge in crowdsourcing systems of incentivizing people to contribute to the best of their abilities. We focus on the class of crowdsourcing tasks where contributions are provided in pursuit of a single correct answer. This class includes citizen science efforts that rely on people to identify events and states in the world. We introduce a new payment rule, called the consensus prediction rule, which uses the consensus of other workers to evaluate a worker's report. We compare this rule to a second payment rule that adapts the peer prediction rule introduced by Miller, Resnick, and Zeckhauser to the crowdsourcing domain. We show that while both rules promote truthful reporting, the consensus prediction rule has better fairness properties. We present analytical and empirical studies of the behavior of these rules in a noisy, real-world scenario where common knowledge assumptions do not necessarily hold.
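
The core idea of paying a worker according to the consensus of the other workers can be sketched as follows. This is an illustrative toy implementation only, not the paper's actual rule: the function name, the majority-vote notion of consensus, and the fixed bonus amount are all assumptions for exposition.

```python
from collections import Counter

def consensus_payment(reports, worker, bonus=1.0):
    """Toy sketch of a consensus-based payment rule (illustrative only):
    pay a fixed bonus when a worker's report agrees with the majority
    report of the *other* workers, and nothing otherwise."""
    # Exclude the worker's own report when forming the consensus,
    # so workers cannot trivially agree with themselves.
    others = [r for w, r in reports.items() if w != worker]
    consensus, _ = Counter(others).most_common(1)[0]
    return bonus if reports[worker] == consensus else 0.0

# Example: three workers report a label for the same task.
reports = {"alice": "A", "bob": "A", "carol": "B"}
print(consensus_payment(reports, "alice"))  # agrees with others' majority
print(consensus_payment(reports, "carol"))  # disagrees, receives nothing
```

A real rule of this kind would compare a worker's report against a probabilistic prediction derived from the other reports rather than a simple majority vote; the sketch only conveys the evaluation-by-peers structure.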