Susan Dumais was inducted into the first class of the ACM SIGIR Academy in recognition of her significant, cumulative contributions to the development of the field of information retrieval (IR). Susan is a leader in IR, and has shaped the discipline through significant contributions to research and innovation. She is joined by 24 other individuals around the globe: James Allan, Ricardo Baeza-Yates, Nicholas Belkin, Andrei Broder, Jamie Callan, William Cooper, W. Bruce Croft, Edward Fox,…
Reinforcement learning is about agents taking information from the world and learning a policy for interacting with it, so that they perform better. So, you can imagine a future where, every time you type on the keyboard, the keyboard learns to understand you better. Or every time you interact with some website, it understands better what your preferences are, so the world just starts working better and better at interacting with people.
John Langford, Partner Research…
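The feedback loop Langford describes can be made concrete with a toy sketch. The example below is hypothetical and not from the post or the researchers' work; it uses a simple epsilon-greedy bandit (a stripped-down form of the RL setting) to show an agent that improves its choices purely from interaction feedback.

```python
import random

# Minimal epsilon-greedy agent: try options, observe feedback (reward),
# and gradually prefer the options that have worked better so far.
class EpsilonGreedyAgent:
    def __init__(self, n_actions, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_actions     # how often each action was tried
        self.values = [0.0] * n_actions   # running average reward per action

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, action, reward):
        # Incremental average: the policy improves with every interaction.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Toy environment: three hypothetical options with unknown success rates,
# standing in for "which suggestion does this user actually prefer?"
true_rates = [0.2, 0.5, 0.8]
agent = EpsilonGreedyAgent(n_actions=3)
for _ in range(10_000):
    a = agent.choose()
    r = 1.0 if random.random() < true_rates[a] else 0.0
    agent.update(a, r)

print("learned value estimates:", [round(v, 2) for v in agent.values])
```

Run enough interactions and the agent's value estimates approach the true rates, so it ends up mostly picking the best option: the "learning from every interaction" idea in miniature, without the exploration and latent-state challenges that full RL research tackles.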
Sber and Microsoft Research summed up the results of a joint project launched in October 2019 to develop a unique AI system that teaches robots to manipulate physical objects of unstable shape in almost the same way that humans do.
Countless companies use online recommendation engines to show customers products and experiences that match their interests. And yet, traditional machine learning models that predict what people might prefer are often based on data from past experience.
Last week at ICML 2020, Mikael Henaff, Akshay Krishnamurthy, John Langford and Dipendra Misra presented a paper on a new reinforcement learning (RL) algorithm that solves three key problems in RL: (i) global exploration, (ii) decoding latent dynamics, and (iii) optimizing a given reward function. Their ICML poster is here.
At ICML 2020, Mikael Henaff, Akshay Krishnamurthy, John Langford and Dipendra Misra published a paper presenting a new reinforcement learning (RL) algorithm called HOMER that addresses three main problems in real-world RL: (i) exploration, (ii) decoding latent dynamics, and (iii) optimizing a given reward function. The arXiv version of the paper can be found here, and the ICML version will be released soon.
MSR’s New York City lab is home to some of the best reinforcement learning research on the planet, but if you ask any of the researchers, they’ll tell you they’re very interested in getting it out of the lab and into the real world. One of those researchers is Dr. Akshay Krishnamurthy, and today he explains how his work on feedback-driven data collection and provably efficient reinforcement learning algorithms is helping to move the RL…
As the recently released GPT-3 and several recent studies demonstrate, racial bias, as well as bias based on gender, occupation, and religion, can be found in popular NLP language models. But a team of AI researchers wants the NLP bias research community to more closely examine and explore relationships between language, power, and social hierarchies like racism in their work. That’s one of three major recommendations a recent study makes for NLP bias researchers.
Dr. Siddhartha Sen is a Principal Researcher in MSR’s New York City lab, and his research interests are, if not impossible, at least impossible-sounding: optimal decision making, universal data structures, and verifiably safe AI. Today, he tells us how he’s using reinforcement learning and HAIbrid algorithms to tap the best of both human and machine intelligence and develop AI that’s minimally disruptive, synergistic with human solutions, and safe.