Uncertainty in data arises in domains ranging from the natural sciences to medicine to computer science. By developing ways to include uncertainty in our information visualizations, we can provide more accurate depictions of critical datasets. One hindrance to visualizing uncertainty is that we must first understand what uncertainty is and how it is expressed. We reviewed existing work on uncertainty from several domains and created a classification of uncertainty based on the literature. We then empirically evaluated and refined our classification by conducting interviews with 18 people from several domains who self-identified as working with uncertainty. Participants described what uncertainty looks like in their data and how they deal with it. We found commonalities in uncertainty across domains and believe our refined classification will help in developing appropriate visualizations for each category of uncertainty.