Standing the test of time: Microsoft researcher honored for prescient machine learning work
Burges definitely succeeded: He ended up being part of a team that created the basis for the ranking system still used in Microsoft's Bing search engine today.
At next week’s International Conference on Machine Learning, Burges, a research manager and principal researcher in Microsoft Research’s Machine Learning Intelligence Group, and his co-authors will receive the Test of Time Award for the 2005 paper that showed how that system works.
The system, called RankNet, was a breakthrough because it was much faster and more accurate than its predecessor. Burges said it could make as much progress training the search engine to rank results in one day, on a single PC, as the previous system had made in several days using a cluster of computers.
RankNet also relies on neural networks: computer systems, loosely modeled on the human brain, that can be trained to perform desired tasks using data labeled by humans. Burges said he believes they were, at the time, the only researchers using that technology for a search engine ranking system.
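The core idea of the 2005 paper can be sketched briefly: a small neural network assigns a relevance score to each document, and for a pair of documents where one should rank above the other, the model's probability of the correct ordering is a sigmoid of the score difference, trained with cross-entropy. The toy network, features, and data below are illustrative only, not the production system:

```python
import numpy as np

# Minimal sketch of RankNet's pairwise idea: a net scores each document,
# and P(i outranks j) = sigmoid(score_i - score_j). Training minimizes
# cross-entropy against the human-labeled ordering. Sizes are illustrative.

rng = np.random.default_rng(0)

n_features, n_hidden = 4, 8
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))  # hidden layer weights
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))           # output layer weights


def score(x):
    """Two-layer net mapping a feature vector to a single relevance score."""
    return float(np.tanh(x @ W1) @ W2)


def pairwise_loss(x_better, x_worse):
    """Cross-entropy loss for 'x_better should outrank x_worse'."""
    s_diff = score(x_better) - score(x_worse)
    p = 1.0 / (1.0 + np.exp(-s_diff))  # model's P(better beats worse)
    return -np.log(p + 1e-12)          # target probability is 1

# One human-labeled pair: x_rel judged more relevant than x_irr.
x_rel = rng.normal(size=n_features)
x_irr = rng.normal(size=n_features)
loss = pairwise_loss(x_rel, x_irr)
```

Because the loss depends only on score *differences* over labeled pairs, each training example is a single pair of documents, which is part of what made the approach cheap enough to train on one PC.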
In the last few years, the use of neural networks has exploded, with researchers using them to make great strides in everything from real-time translation to image captioning. The program co-chairs lauded Burges and his team for showing the power of these networks way back in 2005, long before this most recent renaissance.
Burges said he was drawn to search engine ranking because it was a hot field in which both researchers and technology companies were competing to be the best.
“We knew there was a significant opportunity for having impact,” he said.
These days, Burges is working on a system that aims to teach machines to read and comprehend text, and to be able to answer questions about it.
It’s an ambitious project that, if successful, could have profound implications for artificial intelligence, or the development of systems that can see, hear and understand.
Burges said one thing he’s learned in his career is that it’s worth taking on longshot projects that you are passionate about.
“At some point you should just say, ‘What do I really want to accomplish?’” he said. “And then you should just do it.”
A number of Microsoft researchers are presenting other papers at the machine learning conference. They include:
- Approval Voting and Incentives in Crowdsourcing (611 KB .pdf)
- A Lower Bound for the Optimization of Finite Sums (232 KB .pdf)
- Surrogate Functions for Maximizing Precision at the Top (435 KB .pdf)
- Optimizing Non-decomposable Performance Measures: A Tale of Two Classes (469 KB .pdf)
- Classification with Low Rank and Missing Data (363 KB .pdf)
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization (426 KB .pdf)
- DiSCO: Distributed Optimization for Self-Concordant Empirical Loss (395 KB .pdf)
- A Linear Dynamical System Model for Text (335 KB .pdf)
- Multi-Task Learning for Subspace Segmentation (163 KB .pdf)
- On Greedy Maximization of Entropy (323 KB .pdf)
- Bimodal Modelling of Source Code and Natural Language (448 KB .pdf)
- Intersecting Faces: Non-negative Matrix Factorization With New Guarantees (452 KB .pdf)
- Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization (397 KB .pdf)
- Pushing the Limits of Affine Rank Minimization by Adapting Probabilistic PCA (374 KB .pdf)
Allison Linn is a senior writer at Microsoft Research. Follow Allison on Twitter.