Microsoft Research Blog

The Microsoft Research blog shares stories of collaborations with computer scientists at academic and scientific institutions to advance technical innovations in computing, as well as related events, scholarships, and fellowships.

CIKM: “Slow Search with People” highlights welcoming keynote

October 18, 2015 | Posted by Microsoft Research Blog

Making search better by slowing it down will be explored in the welcoming keynote when CIKM convenes in Melbourne, Australia, this week.

In “Slow Search: Improving Information Retrieval Using Human Assistance,” Principal Researcher Jaime Teevan will share some of the latest findings coming out of Microsoft Research’s Context, Learning, and User Experience for Search group.

The 24th ACM International Conference on Information and Knowledge Management, running Oct. 19-23, brings together leading researchers in the disciplines of information retrieval, knowledge management, and databases.

The keynote, to be delivered Tuesday, focuses “on how search engines can make use of additional time to employ a resource that is inherently slow: other people,” Teevan states in conference notes. “Using crowdsourcing and friendsourcing, I will highlight opportunities for search systems to support new search experiences with high quality result content that takes time to identify.”

The “Slow Search with People” initiative began in 2013 and has since been quietly gaining momentum in a bid to meld the scale and speed of machine intelligence with the quality and depth of analysis from real people.

The effort requires taking a somewhat contrarian approach to the ubiquitous task of finding information, given users' expectation of instant search results. Teevan notes that even a 100-200 millisecond delay is enough to trigger significant user dissatisfaction. It's no accident that Google recently exchanged its long-cherished logo for a sans-serif version that loads faster across multiple devices, she adds.

Although strictly algorithmic search reliably delivers quick answers to simple questions, getting quality results to complex inquiries in economics, psychology, and other fields has proven more elusive.

But what if some of the laborious and repetitive attempts to gain better answers from search could be outsourced to the crowd or simply “friendsourced” on social media? It’s this prospect of freeing up an organization’s most valuable talent to focus on the truly hard stuff that helps propel “Slow Search” forward.

“Using the crowd is a good place to start because we can think about what we might be able to do algorithmically in the future (maybe even fast), like a giant Wizard-of-Oz experiment,” Teevan says.

It can help address “really complex things that require deep understanding …to explore things we can’t yet do algorithmically.”

The crowd, which now mostly refers to the "turkers" on Amazon's Mechanical Turk (MTurk), has quickly become the dominant pool of worker bees for tasks that machines can't do well, such as describing an image or choosing an ad preference. Much of the research presented throughout the week at CIKM could well rely on the results of tasks performed on MTurk.

The method has won many fans, including cognitive psychologist and Microsoft researcher Dan Goldstein, who recently called it "one of the most important and beneficial innovations in the history of psychology," according to the Financial Times. The speed of the research enables far more rapid progress and, because MTurk is so cheap, much larger samples can be used, Goldstein explained. But if the title of the Financial Times article by "Undercover Economist" Tim Harford – "Should we trust the young Turkers?" – is any indication, some issues remain to be fully worked out.

Likewise, Teevan warns of the downsides of relying on crowdsourcing alone, pointing out that her team's own research into crowdsourcing shows how results can be easily manipulated or distorted. "What is the risk of the crowd being used in a coordinated manner to force a system to come up with the wrong outcome?" Teevan asks.

If the risks of crowdsourcing alone prove too high, that’s where friendsourcing comes in.

Queries to actual friends on social media are more likely to generate highly personalized results and can even inspire moments of near heroism like the person Teevan cites who responded to a friend’s question by typing up his grandmother’s handwritten recipe, creating an “entirely new piece of content.”

That’s one way to beat the strictly algorithmic search engines.

Jaime Teevan is a Principal Researcher at Microsoft Research in the Context, Learning, and User Experience for Search (CLUES) group, and an Affiliate Assistant Professor in the Information School at the University of Washington. Working at the intersection of human computer interaction, information retrieval, and social media, she studies and supports people's information seeking activities. Jaime is best known for her research on personalized search, and she developed the first personalized search algorithm used by Microsoft's Bing search engine. The MIT Technology Review recognized Jaime's pioneering work by naming her one of 2009's "35 Innovators Under 35," and the CRA-W honored her in 2014 with the Borg Early Career Award.

—John Kaiser, Research News

For more computer science research news, visit ResearchNews.com.