In this paper, we present Human-Aided Computing, an approach that uses an electroencephalograph (EEG) to measure the presence and outcomes of implicit cognitive processing: processing that users perform automatically and may not even be aware of. We describe a classification system and present results from two experiments as proof of concept. The first experiment showed that our system could classify whether a user was looking at an image of a face, even when the user was not explicitly trying to make this determination. The second experiment extended this result to animal and inanimate-object categories, suggesting generality beyond face recognition. We further show that classification accuracy improves when images are shown multiple times, potentially to multiple people: with as few as ten presentations, accuracy rises well above 90%.
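The accuracy gain from repeated presentations can be illustrated with a simple aggregation model. The sketch below is not the paper's classifier; it merely assumes that each presentation yields an independent prediction with some per-presentation accuracy `p`, and that the final label is decided by majority vote, so the ensemble accuracy follows a binomial tail:

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority vote over n independent
    presentations is correct, given per-presentation accuracy p.
    Ties (possible when n is even) are broken by a fair coin flip.
    Independence across presentations is an assumption, not a
    claim from the paper."""
    # Probability that a strict majority of the n votes is correct.
    acc = sum(comb(n, k) * p**k * (1 - p)**(n - k)
              for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:
        # Half of the tied outcomes are resolved correctly.
        acc += 0.5 * comb(n, n // 2) * (p * (1 - p))**(n // 2)
    return acc

# Under these assumptions, a modestly accurate single-presentation
# classifier already exceeds 90% after ten presentations.
print(majority_vote_accuracy(0.7, 1))   # single presentation
print(majority_vote_accuracy(0.7, 10))  # ten presentations
```

With a hypothetical 70% single-presentation accuracy, ten presentations push the majority-vote accuracy above 0.9, consistent in spirit with the trend reported in the abstract (though the paper's actual aggregation scheme and accuracies are determined empirically).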