{"id":231283,"date":"2016-05-12T11:52:39","date_gmt":"2016-05-12T18:52:39","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&#038;p=231283"},"modified":"2025-08-06T12:01:09","modified_gmt":"2025-08-06T19:01:09","slug":"neuir2016","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/neuir2016\/","title":{"rendered":"Neu-IR: The SIGIR 2016 Workshop on Neural Information Retrieval"},"content":{"rendered":"\n\n<p><strong><<\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-content\/papercite-data\/pdf\/craswell-report-2016.pdf\" target=\"_blank\">Final Workshop Report<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0>><\/strong><\/p>\n<p><strong>Submission Deadline<\/strong>: May 16<br \/>\n<strong>Acceptance Notifications<\/strong>: June 6<br \/>\n<strong>Camera-ready Deadline<\/strong>: June 17<br \/>\n<strong>Workshop<\/strong>: July 21<\/p>\n<p>The first international Neu-IR (pronounced &#8220;<em>new<\/em> IR&#8221;) workshop on neural information retrieval will be hosted at <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/sigir.org\/sigir2016\/\" target=\"_blank\">SIGIR 2016 <span class=\"sr-only\"> (opens in new tab)<\/span><\/a>in Pisa, Tuscany, Italy on 21 July, 2016.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p style=\"text-align: justify\" align=\"justify\">(The final report on the workshop is available <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-content\/papercite-data\/pdf\/craswell-report-2016.pdf\" target=\"_blank\">here<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.)<\/p>\n<p style=\"text-align: justify\" align=\"justify\">In recent 
years, deep neural networks have yielded significant performance improvements on speech recognition and computer vision tasks, and have led to exciting breakthroughs in novel application areas such as automatic voice translation, image captioning, and conversational agents. Despite their good performance on natural language processing (NLP) tasks, deep neural networks have so far received relatively little scrutiny on IR tasks.<\/p>\n<p style=\"text-align: justify\" align=\"justify\">The scarcity of positive results in information retrieval is partly because IR tasks such as ranking are fundamentally different from NLP tasks, and partly because the IR and neural network communities are only beginning to apply these techniques to core information retrieval problems. Given that deep learning has made such a big impact, first on speech processing and computer vision and now, increasingly, also on computational linguistics, it seems clear that deep learning will have a major impact on information retrieval and that this is an ideal time for a workshop in this area. Our focus is on the applicability of deep neural networks to information retrieval: demonstrating performance improvements on public or private information retrieval datasets, identifying key modelling challenges and best practices, and thinking about what insights deep neural network architectures give us about information retrieval problems.<\/p>\n<p style=\"text-align: justify\" align=\"justify\"><strong>Neu-IR 2016 <\/strong>will be a highly interactive full day workshop that will provide a forum for academic and industrial researchers working at the intersection of IR and neural networks. 
The purpose is to provide an opportunity for people to present new work and early results, compare notes on neural network toolkits, share best practices, and discuss the main challenges facing this line of research.<\/p>\n<p style=\"text-align: justify\" align=\"justify\">Please use the tabs above to navigate to see the program, the accepted papers and other details of this workshop.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p align=\"justify\">Neu-IR will be a highly interactive full day workshop, featuring a mix of presentation and interaction formats. The full schedule is presented below.<\/p>\n<p><strong>Morning Session I<\/strong><br \/>\n<span style=\"color: #999999\">09:00 \u2013 10:30<\/span><\/p>\n<p style=\"padding-left: 30px\">Welcome and opening announcements [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/neuir-2016-opening-note\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nBhaskar Mitra<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Keynote: Recurrent Networks and Beyond [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/recurrent-networks-and-beyond-by-tomas-mikolov\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nTomas Mikolov, Facebook AI Research<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Paper: Query Expansion with Locally-Trained Word Embeddings [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/query-expansion-with-locallytrained-word-embeddings-neuir-2016\">slides<span class=\"sr-only\"> 
(opens in new tab)<\/span><\/a>]<br \/>\nFernando Diaz, Bhaskar Mitra and Nick Craswell<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Paper: Uncertainty in Neural Network Word Embedding Exploration of Potential Threshold [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/de.slideshare.net\/NavidRekabsaz\/uncertainty-in-neural-network-word-embedding-exploration-of-threshold-for-similarity\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nNavid Rekabsaz, Mihai Lupu and Allan Hanbury<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p><strong>Coffee Break<\/strong><br \/>\n<span style=\"color: #999999\">10:30 \u2013 11:00<\/span><\/p>\n<p><strong>Morning Session II<\/strong><br \/>\n<span style=\"color: #999999\">11:00 \u2013 12:30<\/span><\/p>\n<p style=\"padding-left: 30px\">Lessons from the Trenches [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/neuir-2016-lessons-from-the-trenches\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Poster presentations<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p><strong>Lunch Break<\/strong><br \/>\n<span style=\"color: #999999\">12:30 \u2013 14:00<\/span><\/p>\n<p><strong>Afternoon Session I<\/strong><br \/>\n<span style=\"color: #999999\">14:00 \u2013 15:30<\/span><\/p>\n<p style=\"padding-left: 30px\">Keynote: Does IR Need Deep Learning? 
[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.hangli-hl.com\/uploads\/3\/4\/4\/6\/34465961\/does_ir_need_deep_learning.pdf\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nHang Li, Huawei Technologies<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Paper: Modelling User Preferences using Word Embeddings for Context-Aware Venue Recommendation [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/drive.google.com\/file\/d\/0BzMK-0IWc2LeU2gzRDNCX0owd2M\/view\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nJarana Manotumruksa, Craig Macdonald and Iadh Ounis<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Paper: A Study of MatchPyramid Models on Ad-hoc Retrieval [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.bigdatalab.ac.cn\/~gjf\/papers\/2016\/NEUIR_talk.pdf\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nLiang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu and Xueqi Cheng<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Paper: Emulating Human Conversations using Convolutional Neural Network-based IR [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.slideshare.net\/secret\/t6TIb6uEDuMgfB\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nAbhay Prakash, Chris Brockett and Puneet Agrawal<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p><strong>Coffee Break<\/strong><br \/>\n<span style=\"color: #999999\">15:30 \u2013 16:00<\/span><\/p>\n<p><strong>Afternoon Session II<\/strong><br \/>\n<span style=\"color: 
#999999\">16:00 \u2013 17:45<\/span><\/p>\n<p style=\"padding-left: 30px\">Breakout session<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Breakout session retrospective<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Concluding remarks<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h3 align=\"justify\">Recurrent Networks and Beyond<\/h3>\n<p align=\"justify\">Tomas Mikolov, Facebook AI Research<\/p>\n<p align=\"justify\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/research.facebook.com\/tomas-mikolov\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-242012 size-thumbnail\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/mikolov-150x150.png\" alt=\"mikolov\" width=\"150\" height=\"150\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/mikolov-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/mikolov-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/mikolov.png 250w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>Abstract: In this talk, I will give a brief overview of recurrent networks and their applications. I will then present several extensions that aim to help these powerful models to learn more patterns from training data. This will include a simple modification of the architecture that allows the network to capture longer context information, and an architecture that can learn complex algorithmic patterns. 
The talk will conclude with a discussion of a long-term research plan on how to advance machine learning techniques towards the development of artificial intelligence.<\/p>\n<p align=\"justify\">Bio: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/research.facebook.com\/tomas-mikolov\" target=\"_blank\">Tomas Mikolov<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0has been a research scientist at Facebook AI Research since May 2014. Previously\u00a0he was a member of the Google Brain team, where\u00a0he developed and implemented efficient algorithms for computing distributed representations of words (word2vec project). He obtained his PhD from Brno University of Technology (Czech Republic) for\u00a0his work on recurrent neural network based language models (RNNLM).\u00a0His long-term research goal is to develop intelligent machines capable of learning and communicating with people using natural language.<\/p>\n<p>&nbsp;<\/p>\n<h3 align=\"justify\">Does IR Need Deep Learning?<\/h3>\n<p align=\"justify\">Hang Li, Huawei Technologies<\/p>\n<p align=\"justify\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.hangli-hl.com\/\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-242015 size-thumbnail\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/HangLi-150x150.jpg\" alt=\"HangLi\" width=\"150\" height=\"150\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/HangLi-150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/HangLi-180x180.jpg 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/HangLi-360x360.jpg 360w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>Abstract: In recent years, deep 
learning has become the key technology behind state-of-the-art systems in many areas of computer science, such as computer vision, speech processing, and natural language processing. A question naturally arises: can deep learning also bring breakthroughs to IR (information retrieval)? In fact, a large amount of effort has been made to address this question and significant progress has been achieved. Yet there is still doubt about whether this is the case.<\/p>\n<p align=\"justify\">In this talk, I will argue that, if we take a broad view of IR, then we arrive at the conclusion that deep learning can indeed greatly boost IR. It has been observed that deep learning can bring great improvements on some hard problems in IR, such as question answering over knowledge bases and image retrieval; on the other hand, for some traditional IR tasks, in some sense easier tasks, such as document retrieval, the improvements might not be so notable. I will introduce some of the work on deep learning for IR conducted at Huawei Noah\u2019s Ark Lab to support my claim. I will also discuss the strengths and limitations of deep learning, IR problems on which deep learning can potentially make significant contributions, and future directions of research on IR.<\/p>\n<p align=\"justify\">Bio: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.hangli-hl.com\/\">Hang Li<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> is director of the Noah\u2019s Ark Lab of Huawei Technologies and adjunct professor at Peking University and Nanjing University. He is an ACM Distinguished Scientist. His research areas include information retrieval, natural language processing, statistical machine learning, and data mining. Hang graduated from Kyoto University in 1988 and earned his PhD from the University of Tokyo in 1998. 
He worked at the NEC lab as a researcher from 1991 to 2001, and at Microsoft Research Asia as a senior researcher and research manager from 2001 to 2012. He joined Huawei Technologies in 2012. Hang has published three technical books and more than 120 technical papers at top international conferences including SIGIR, WWW, WSDM, ACL, EMNLP, ICML, NIPS, SIGKDD, AAAI, IJCAI, and in top international journals including CL, NLE, JMLR, TOIS, IRJ, IPM, TKDE, TWEB, TIST. His and his colleagues\u2019 papers received the SIGKDD\u201908 best application paper award, the SIGIR\u201908 best student paper award, and the ACL\u201912 best student paper award. Hang worked on the development of several products, such as Microsoft SQL Server 2005, Office 2007, Live Search 2008, Bing 2009, Office 2010, Bing 2010, Office 2012, and Huawei Smartphones 2014. He has 42 granted US patents. Hang is also very active in the research community and has served or is serving top international conferences as PC chair, senior PC member, or PC member, including SIGIR, WWW, WSDM, ACL, NAACL, EMNLP, NIPS, SIGKDD, ICDM, IJCAI, ACML, and top international journals as associate editor or editorial board member, including CL, IRJ, TIST, JASIST, JCST.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p align=\"justify\">We had 27 submissions (excluding three incomplete submissions). Every paper was reviewed by at least two members of the program committee and finally 19 submissions were accepted\u00a0(acceptance rate of 73%). Among the accepted papers, there were a few popular themes. 8 papers were related to learning and applications of word embeddings. 10 papers focused on applications of deep neural networks for different IR tasks. 
The accepted papers also covered a broad range\u00a0of tasks, including question answering, proactive IR, knowledge-based IR,\u00a0conversational models and\u00a0text-to-image, but document ranking was a popular choice with 7 papers using it as the evaluation task.\u00a0The word cloud summary (generated using <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.wordle.net\/\">http:\/\/www.wordle.net<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>)\u00a0of the abstracts of the accepted papers highlights additional themes across all the submissions.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-241823 size-full\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/wordcloud-abstracts.png\" alt=\"wordcloud-abstracts\" width=\"812\" height=\"527\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/wordcloud-abstracts.png 812w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/wordcloud-abstracts-300x195.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/wordcloud-abstracts-768x498.png 768w\" sizes=\"auto, (max-width: 812px) 100vw, 812px\" \/><\/p>\n<p align=\"justify\">Geographically, the accepted papers (based on the first author) came from\u00a09 countries across 3 continents (FR: 4, IN: 4, CN: 2, DK: 2, UK: 2, US: 2, AT: 1, FI: 1 and IT: 1). 
Based on the first author&#8217;s affiliation, 2 of the accepted papers came from industry and the rest from academia.<\/p>\n<p>&nbsp;<\/p>\n<p>The full list of accepted papers is below:<\/p>\n<p>An empirical study on large scale text classification with skip-gram embeddings\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.06623\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Georgios Balikas and Massih-Reza Amini<\/span><\/p>\n<p>Deep Feature Fusion Network for Answer Quality Prediction in Community Question Answering <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07103\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Sai Praneeth Suggu, Kushwanth N. Goutham T, Manoj K. 
Chinnakotla and Manish Shrivastava<\/span><\/p>\n<p>Selective Term Proximity Scoring Via BP-ANN\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07188\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Ju Yang, Rebecca Stones, Gang Wang and Xiaoguang Liu<\/span><\/p>\n<p>Adaptability of Neural Networks on Varying Granularity IR Tasks\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07565\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Daniel Cohen, Qingyao Ai and W. 
Bruce Croft<\/span><\/p>\n<p>Emulating Human Conversations using Convolutional Neural Network-based IR\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07056\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Abhay Prakash, Chris Brockett and Puneet Agrawal<\/span><\/p>\n<p>A Study of MatchPyramid Models on Ad-hoc Retrieval\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.04648\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu and Xueqi Cheng<\/span><\/p>\n<p>Learning text representation using recurrent convolutional neural network with highway layers\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.06905\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Ying Wen, Weinan Zhang, Rui Luo and Jun Wang<\/span><\/p>\n<p>Toward Word Embedding for Personalized Information Retrieval\u00a0<a class=\"msr-external-link glyph-append 
glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.06991\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Nawal Ould Amer, Philippe Mulhem and Mathias G\u00e9ry<\/span><\/p>\n<p>Toward a Deep Neural Approach for Knowledge-Based IR\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07211\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Gia-Hung Nguyen, Lynda Tamine, Laure Soulier and Nathalie Bricon-Souf<\/span><\/p>\n<p>Query Expansion with Locally-Trained Word Embeddings\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1605.07891\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Fernando Diaz, Bhaskar Mitra and Nick Craswell<\/span><\/p>\n<p>LSTM-Based Predictions for Proactive Information Retrieval\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.06137\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone 
wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Petri Luukkonen, Markus Koskela and Patrik Flor\u00e9en<\/span><\/p>\n<p>Picture It In Your Mind: Generating High Level Visual Representations From Textual Descriptions <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07287\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Fabio Carrara, Andrea Esuli, Tiziano Fagni, Fabrizio Falchi and Alejandro Moreo Fern\u00e1ndez<\/span><\/p>\n<p>Uncertainty in Neural Network Word Embedding Exploration of Potential Threshold\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.06086\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Navid Rekabsaz, Mihai Lupu and Allan Hanbury<\/span><\/p>\n<p>Deep Learning Relevance: Creating Relevant Information (as Opposed to Retrieving it) <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07660\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" 
src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Christina Lioma, Birger Larsen, Casper Petersen and Jakob Grue Simonsen<\/span><\/p>\n<p>Learning Dynamic Classes of Events using Stacked Multilayer Perceptron Networks\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07219\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Nattiya Kanhabua, Huamin Ren and Thomas B. Moeslund<\/span><\/p>\n<p>Representing Documents and Queries as Sets of Word Embedded Vectors for Information Retrieval\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07869\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Debasis Ganguly, Dwaipayan Roy, Mandar Mitra and Gareth Jones<\/span><\/p>\n<p>Using Word Embeddings for Automatic Query Expansion\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07608\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" 
alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Dwaipayan Roy, Debjyoti Paul and Mandar Mitra<\/span><\/p>\n<p>Modelling User Preferences using Word Embeddings for Context-Aware Venue Recommendation\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07828\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Jarana Manotumruksa, Craig Macdonald and Iadh Ounis<\/span><\/p>\n<p>Using Word Embeddings in Twitter Election Classification\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07006\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Xiao Yang, Craig Macdonald and Iadh Ounis<\/span><\/p>\n<p>&nbsp;<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p style=\"text-align: justify\">The Lessons from the Trenches will be a series of &#8220;lightning talks&#8221; by\u00a0researchers\u00a0who are actively working in the intersection of information retrieval and neural networks who\u00a0want to share their personal insights and learning with the broader community. 
In particular, we are hoping to hear about:<\/p>\n<ul style=\"text-align: justify\">\n<li>Key challenges faced in making neural models work effectively for IR tasks<\/li>\n<li>Best practices and related insights<\/li>\n<li>Negative results<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p style=\"text-align: justify\">The following people have signed up to present at this session.<\/p>\n<ul>\n<li style=\"text-align: justify\">Sergey Nikolenko<\/li>\n<li style=\"text-align: justify\">Qingyao Ai<\/li>\n<li style=\"text-align: justify\">Debasis Ganguly<\/li>\n<li style=\"text-align: justify\">Alessandro Moschitti<\/li>\n<li style=\"text-align: justify\">Jun Xu<\/li>\n<li style=\"text-align: justify\">Grady Simon<\/li>\n<li style=\"text-align: justify\">Alexey Borisov<\/li>\n<li style=\"text-align: justify\">Bhaskar Mitra<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p align=\"justify\">We solicit submissions of papers of two to six pages (excluding references), representing reports of original research, preliminary research results, proposals for new work, descriptions of neural network based toolkits tailored for IR, and position papers. Papers presented at the workshop will be required to be uploaded to arXiv.org but will be considered <strong>non-archival<\/strong>, and may be submitted elsewhere (modified or not), although the workshop site will maintain a link to the arXiv versions. 
This makes the workshop a forum for the presentation and discussion of current work, without preventing the work from being published elsewhere.<\/p>\n<p>We are interested in submissions relevant to the following main themes:<\/p>\n<ol>\n<li>The application of neural network models in IR tasks, including but not limited to:\n<ul>\n<li>Full text document retrieval, passage retrieval, question answering<\/li>\n<li>Web search, searching social media, distributed information retrieval, entity ranking<\/li>\n<li>Learning to rank combined with neural network based representation learning<\/li>\n<li>User and task modelling, personalized search, diversity<\/li>\n<li>Query formulation assistance, query recommendation, conversational search<\/li>\n<li>Multimedia retrieval<\/li>\n<\/ul>\n<\/li>\n<li>Fundamental modelling challenges faced in such applications, including but not limited to:\n<ul>\n<li>Learning dense representations for long documents<\/li>\n<li>Dealing with rare queries and rare words<\/li>\n<li>Modelling text at different granularities (character, word, passage, document)<\/li>\n<li>Compositionality of vector representations<\/li>\n<li>Jointly modelling queries, documents, entities and other structured\/knowledge data<\/li>\n<\/ul>\n<\/li>\n<li>Best practices for research and development in the area, dealing with concerns such as:\n<ul>\n<li>Finding sufficient publicly-available training data<\/li>\n<li>Baselines, test data, avoiding overfitting<\/li>\n<li>Neural network toolkits<\/li>\n<li>Real-world use cases, deployment at scale<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p align=\"justify\">All papers will be peer reviewed (single-blind) by the program committee and judged by their relevance to the workshop, especially to the main themes identified above, and their potential to generate discussion. All submissions must be formatted according to the ACM SIG proceedings template. 
Please note that at least one of the authors of each accepted paper must register for the workshop and present the paper in person.<\/p>\n<p>Submission URL: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/easychair.org\/conferences\/?conf=neuir2016\" target=\"_blank\" rel=\"nofollow\">https:\/\/easychair.org\/conferences\/?conf=neuir2016<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p><strong>Organizers<\/strong><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nickcr\">Nick Craswell<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft, Bellevue, US<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/ciir.cs.umass.edu\/personnel\/croft.html\">W. Bruce Croft<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Massachusetts, Amherst, US<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.bigdatalab.ac.cn\/~gjf\">Jiafeng Guo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Chinese Academy of Sciences, Beijing, China<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bmitra\">Bhaskar Mitra<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft, Cambridge, UK<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/staff.fnwi.uva.nl\/m.derijke\">Maarten de Rijke<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Amsterdam, Amsterdam, The Netherlands<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Program Committee<\/strong><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" 
href=\"http:\/\/www.da.inf.ethz.ch\/people\/CarstenEickhoff\/\">Carsten Eickhoff<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, ETH Zurich<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.computing.dcu.ie\/~dganguly\/\">Debasis Ganguly<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Dublin City University<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kahofman\/\">Katja Hofmann<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft Research<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.hangli-hl.com\/\">Hang Li<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Huawei Technologies<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.cs.nyu.edu\/~mirowski\/\">Piotr Mirowski<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Google DeepMind<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/disi.unitn.it\/moschitti\/\">Alessandro Moschitti<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Qatar Computing Research Institute, HBKU<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/events.yandex.com\/people\/11561\/\">Pavel Serdyukov<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Yandex<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/pomino.isti.cnr.it\/~silvestr\/\">Fabrizio Silvestri<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Yahoo Labs<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" 
href=\"http:\/\/www-etud.iro.umontreal.ca\/~sordonia\/\">Alessandro Sordoni<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Montreal<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The first international Neu-IR (pronounced &#8220;new IR&#8221;) workshop on neural information retrieval will be hosted at SIGIR 2016 in Pisa, Tuscany, Italy on 21 July, 2016.<\/p>\n","protected":false},"featured_media":241994,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2016-07-21","msr_enddate":"2016-07-21","msr_location":"Pisa, Italy","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[13555],"msr-region":[239178],"msr-event-type":[197941],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-231283","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-search-information-retrieval","msr-region-europe","msr-event-type-conferences","msr-locale-en_us"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"Neu-IR: The SIGIR 2016 Workshop on Neural Information Retrieval\",\"backgroundColor\":\"grey\",\"image\":{\"id\":241994,\"url\":\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/neuir-2016-logo-large-wpbg-2.png\",\"alt\":\"\"}} \/-->\n\n<!-- wp:msr\/content-tabs --><!-- wp:msr\/content-tab {\"title\":\"Summary\"} --><!-- wp:freeform --><p><strong>&lt;&lt;\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
href=\"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-content\/papercite-data\/pdf\/craswell-report-2016.pdf\" target=\"_blank\">Final Workshop Report<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0&gt;&gt;<\/strong><\/p>\n<p><strong>Submission Deadline<\/strong>: May 16<br \/>\n<strong>Acceptance Notifications<\/strong>: June 6<br \/>\n<strong>Camera-ready Deadline<\/strong>: June 17<br \/>\n<strong>Workshop<\/strong>: July 21<\/p>\n<p>The first international Neu-IR (pronounced &#8220;<em>new<\/em> IR&#8221;) workshop on neural information retrieval will be hosted at <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/sigir.org\/sigir2016\/\" target=\"_blank\">SIGIR 2016 <span class=\"sr-only\"> (opens in new tab)<\/span><\/a>in Pisa, Tuscany, Italy on 21 July, 2016.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p style=\"text-align: justify\" align=\"justify\">(The final report on the workshop is available <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-content\/papercite-data\/pdf\/craswell-report-2016.pdf\" target=\"_blank\">here<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.)<\/p>\n<p style=\"text-align: justify\" align=\"justify\">In recent years, deep neural networks have yielded significant performance improvements on speech recognition and computer vision tasks, as well as led to exciting breakthroughs in novel application areas such as automatic voice translation, image captioning, and conversational agents. 
Although deep neural networks have demonstrated good performance on natural language processing (NLP) tasks, their performance on IR tasks has received relatively little scrutiny.<\/p>\n<p style=\"text-align: justify\" align=\"justify\">The scarcity of positive results in information retrieval is partly because IR tasks such as ranking are fundamentally different from NLP tasks, and partly because the IR and neural network communities are only beginning to focus on applying these techniques to core information retrieval problems. Given that deep learning has made such a big impact, first on speech processing and computer vision and now, increasingly, also on computational linguistics, it seems clear that deep learning will have a major impact on information retrieval and that this is an ideal time for a workshop in this area. Our focus is on the applicability of deep neural networks to information retrieval: demonstrating performance improvements on public or private information retrieval datasets, identifying key modelling challenges and best practices, and thinking about what insights deep neural network architectures give us about information retrieval problems.<\/p>\n<p style=\"text-align: justify\" align=\"justify\"><strong>Neu-IR 2016 <\/strong>will be a highly interactive full-day workshop that will provide a forum for academic and industrial researchers working at the intersection of IR and neural networks. 
The purpose is to provide an opportunity for people to present new work and early results, compare notes on neural network toolkits, share best practices, and discuss the main challenges facing this line of research.<\/p>\n<p style=\"text-align: justify\" align=\"justify\">Please use the tabs above to see the program, the accepted papers and other details of this workshop.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Program\"} --><!-- wp:freeform --><p align=\"justify\">Neu-IR will be a highly interactive full-day workshop, featuring a mix of presentation and interaction formats. The full schedule is presented below.<\/p>\n<p><strong>Morning Session I<\/strong><br \/>\n<span style=\"color: #999999\">09:00 \u2013 10:30<\/span><\/p>\n<p style=\"padding-left: 30px\">Welcome and opening announcements [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/neuir-2016-opening-note\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nBhaskar Mitra<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Keynote: Recurrent Networks and Beyond [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/recurrent-networks-and-beyond-by-tomas-mikolov\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nTomas Mikolov, Facebook AI Research<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Paper: Query Expansion with Locally-Trained Word Embeddings [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" 
href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/query-expansion-with-locallytrained-word-embeddings-neuir-2016\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nFernando Diaz, Bhaskar Mitra and Nick Craswell<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Paper: Uncertainty in Neural Network Word Embedding Exploration of Potential Threshold [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/de.slideshare.net\/NavidRekabsaz\/uncertainty-in-neural-network-word-embedding-exploration-of-threshold-for-similarity\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nNavid Rekabsaz, Mihai Lupu and Allan Hanbury<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p><strong>Coffee Break<\/strong><br \/>\n<span style=\"color: #999999\">10:30 \u2013 11:00<\/span><\/p>\n<p><strong>Morning Session II<\/strong><br \/>\n<span style=\"color: #999999\">11:00 \u2013 12:30<\/span><\/p>\n<p style=\"padding-left: 30px\">Lessons from the Trenches [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/neuir-2016-lessons-from-the-trenches\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Poster presentations<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p><strong>Lunch Break<\/strong><br \/>\n<span style=\"color: #999999\">12:30 \u2013 14:00<\/span><\/p>\n<p><strong>Afternoon Session I<\/strong><br \/>\n<span style=\"color: #999999\">14:00 \u2013 15:30<\/span><\/p>\n<p style=\"padding-left: 30px\">Keynote: Does IR Need Deep Learning? 
[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.hangli-hl.com\/uploads\/3\/4\/4\/6\/34465961\/does_ir_need_deep_learning.pdf\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nHang Li, Huawei Technologies<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Paper: Modelling User Preferences using Word Embeddings for Context-Aware Venue Recommendation [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/drive.google.com\/file\/d\/0BzMK-0IWc2LeU2gzRDNCX0owd2M\/view\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nJarana Manotumruksa, Craig Macdonald and Iadh Ounis<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Paper: A Study of MatchPyramid Models on Ad-hoc Retrieval [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.bigdatalab.ac.cn\/~gjf\/papers\/2016\/NEUIR_talk.pdf\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nLiang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu and Xueqi Cheng<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Paper: Emulating Human Conversations using Convolutional Neural Network-based IR [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.slideshare.net\/secret\/t6TIb6uEDuMgfB\">slides<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<br \/>\nAbhay Prakash, Chris Brockett and Puneet Agrawal<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p><strong>Coffee Break<\/strong><br \/>\n<span style=\"color: #999999\">15:30 \u2013 16:00<\/span><\/p>\n<p><strong>Afternoon Session II<\/strong><br \/>\n<span style=\"color: 
#999999\">16:00 \u2013 17:45<\/span><\/p>\n<p style=\"padding-left: 30px\">Breakout session<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Breakout session retrospective<br \/>\n<span style=\"color: #999999\">45 mins<\/span><\/p>\n<p style=\"padding-left: 30px\">Concluding remarks<br \/>\n<span style=\"color: #999999\">15 mins<\/span><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Keynotes\"} --><!-- wp:freeform --><h3 align=\"justify\">Recurrent Networks and Beyond<\/h3>\n<p align=\"justify\">Tomas Mikolov, Facebook AI Research<\/p>\n<p align=\"justify\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/research.facebook.com\/tomas-mikolov\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-242012 size-thumbnail\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/mikolov-150x150.png\" alt=\"mikolov\" width=\"150\" height=\"150\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/mikolov-150x150.png 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/mikolov-180x180.png 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/mikolov.png 250w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>Abstract: In this talk, I will give a brief overview of recurrent networks and their applications. I will then present several extensions that aim to help these powerful models learn more patterns from training data. This will include a simple modification of the architecture that allows them to capture longer context information, and an architecture that can learn complex algorithmic patterns. 
The talk will conclude with a discussion of a long-term research plan for advancing machine learning techniques towards the development of artificial intelligence.<\/p>\n<p align=\"justify\">Bio: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/research.facebook.com\/tomas-mikolov\" target=\"_blank\">Tomas Mikolov<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0has been a research scientist at Facebook AI Research since May 2014. Previously\u00a0he was a member of the Google Brain team, where\u00a0he developed and implemented efficient algorithms for computing distributed representations of words (word2vec project). He obtained his PhD from Brno University of Technology (Czech Republic) for\u00a0his work on recurrent neural network based language models (RNNLM).\u00a0His long-term research goal is to develop intelligent machines capable of learning and communicating with people using natural language.<\/p>\n<p>&nbsp;<\/p>\n<h3 align=\"justify\">Does IR Need Deep Learning?<\/h3>\n<p align=\"justify\">Hang Li, Huawei Technologies<\/p>\n<p align=\"justify\"><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.hangli-hl.com\/\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-242015 size-thumbnail\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/HangLi-150x150.jpg\" alt=\"HangLi\" width=\"150\" height=\"150\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/HangLi-150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/HangLi-180x180.jpg 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/HangLi-360x360.jpg 360w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>Abstract: In recent years, deep 
learning has become the key technology of state-of-the-art systems in many areas of computer science, such as computer vision, speech processing, and natural language processing. A question naturally arises: can deep learning also bring a breakthrough to IR (information retrieval)? In fact, a large amount of effort has been made to address this question, and significant progress has been achieved. Yet there is still doubt about whether this is the case.<\/p>\n<p align=\"justify\">In this talk, I will argue that, if we take a broad view on IR, then we arrive at the conclusion that deep learning can indeed greatly boost IR. It has been observed that deep learning can make great improvements on some hard problems in IR, such as question answering over knowledge bases and image retrieval; on the other hand, for some traditional, in some sense easier, IR tasks, such as document retrieval, the improvements might not be so notable. I will introduce some of the work on deep learning for IR conducted at Huawei Noah\u2019s Ark Lab, to support my claim. I will also discuss the strengths and limitations of deep learning, IR problems on which deep learning can potentially make significant contributions, and future directions of research on IR.<\/p>\n<p align=\"justify\">Bio: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.hangli-hl.com\/\">Hang Li<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> is director of the Noah\u2019s Ark Lab of Huawei Technologies, and an adjunct professor at Peking University and Nanjing University. He is an ACM Distinguished Scientist. His research areas include information retrieval, natural language processing, statistical machine learning, and data mining. Hang graduated from Kyoto University in 1988 and earned his PhD from the University of Tokyo in 1998. 
He worked at the NEC lab as a researcher from 1991 to 2001, and at Microsoft Research Asia as a senior researcher and research manager from 2001 to 2012. He joined Huawei Technologies in 2012. Hang has published three technical books, and more than 120 technical papers at top international conferences including SIGIR, WWW, WSDM, ACL, EMNLP, ICML, NIPS, SIGKDD, AAAI, IJCAI, and in top international journals including CL, NLE, JMLR, TOIS, IRJ, IPM, TKDE, TWEB, TIST. He and his colleagues\u2019 papers received the SIGKDD\u201908 best application paper award, the SIGIR\u201908 best student paper award, and the ACL\u201912 best student paper award. Hang worked on the development of several products such as Microsoft SQL Server 2005, Office 2007, Live Search 2008, Bing 2009, Office 2010, Bing 2010, Office 2012, and Huawei Smartphones 2014. He has 42 granted US patents. Hang is also very active in the research communities and has served or is serving top international conferences as PC chair, Senior PC member, or PC member, including SIGIR, WWW, WSDM, ACL, NAACL, EMNLP, NIPS, SIGKDD, ICDM, IJCAI, ACML, and top international journals as associate editor or editorial board member, including CL, IRJ, TIST, JASIST, JCST.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Accepted Papers\"} --><!-- wp:freeform --><p align=\"justify\">We had 27 submissions (excluding three incomplete submissions). Every paper was reviewed by at least two members of the program committee and finally, 19 submissions were accepted\u00a0(an acceptance rate of 73%). Among the accepted papers, there were a few popular themes. 8 papers were related to learning and applications of word embeddings. 10 papers focused on applications of deep neural networks for different IR tasks. 
The accepted papers also covered a broad range\u00a0of tasks, including question answering, proactive IR, knowledge-based IR,\u00a0conversational models and\u00a0text-to-image, but document ranking was a popular choice, with 7 papers using it as the evaluation task.\u00a0The word cloud summary (generated using <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.wordle.net\/\">http:\/\/www.wordle.net<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>)\u00a0of the abstracts of the accepted papers highlights additional themes across all the submissions.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-241823 size-full\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/wordcloud-abstracts.png\" alt=\"wordcloud-abstracts\" width=\"812\" height=\"527\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/wordcloud-abstracts.png 812w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/wordcloud-abstracts-300x195.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/wordcloud-abstracts-768x498.png 768w\" sizes=\"auto, (max-width: 812px) 100vw, 812px\" \/><\/p>\n<p align=\"justify\">Geographically, the accepted papers (based on the first author) came from\u00a09 countries and 3 continents (FR: 4, IN: 4, CN: 2, DK: 2, UK: 2, US: 2, AT: 1, FI: 1 and IT: 1). 
Based on the first author&#8217;s affiliation, 2 of the accepted papers came from industry and the rest from academia.<\/p>\n<p>&nbsp;<\/p>\n<p>The full list of accepted papers is below:<\/p>\n<p>An empirical study on large scale text classification with skip-gram embeddings\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.06623\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Georgios Balikas and Massih-Reza Amini<\/span><\/p>\n<p>Deep Feature Fusion Network for Answer Quality Prediction in Community Question Answering <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07103\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Sai Praneeth Suggu, Kushwanth N. Goutham T, Manoj K. 
Chinnakotla and Manish Shrivastava<\/span><\/p>\n<p>Selective Term Proximity Scoring Via BP-ANN\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07188\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Ju Yang, Rebecca Stones, Gang Wang and Xiaoguang Liu<\/span><\/p>\n<p>Adaptability of Neural Networks on Varying Granularity IR Tasks\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07565\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Daniel Cohen, Qingyao Ai and W. 
Bruce Croft<\/span><\/p>\n<p>Emulating Human Conversations using Convolutional Neural Network-based IR\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07056\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Abhay Prakash, Chris Brockett and Puneet Agrawal<\/span><\/p>\n<p>A Study of MatchPyramid Models on Ad-hoc Retrieval\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.04648\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu and Xueqi Cheng<\/span><\/p>\n<p>Learning text representation using recurrent convolutional neural network with highway layers\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.06905\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Ying Wen, Weinan Zhang, Rui Luo and Jun Wang<\/span><\/p>\n<p>Toward Word Embedding for Personalized Information Retrieval\u00a0<a class=\"msr-external-link glyph-append 
glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.06991\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Nawal Ould Amer, Philippe Mulhem and Mathias G\u00e9ry<\/span><\/p>\n<p>Toward a Deep Neural Approach for Knowledge-Based IR\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07211\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Gia-Hung Nguyen, Lynda Tamine, Laure Soulier and Nathalie Bricon-Souf<\/span><\/p>\n<p>Query Expansion with Locally-Trained Word Embeddings\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/arxiv.org\/abs\/1605.07891\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Fernando Diaz, Bhaskar Mitra and Nick Craswell<\/span><\/p>\n<p>LSTM-Based Predictions for Proactive Information Retrieval\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.06137\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone 
wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Petri Luukkonen, Markus Koskela and Patrik Flor\u00e9en<\/span><\/p>\n<p>Picture It In Your Mind: Generating High Level Visual Representations From Textual Descriptions <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07287\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Fabio Carrara, Andrea Esuli, Tiziano Fagni, Fabrizio Falchi and Alejandro Moreo Fern\u00e1ndez<\/span><\/p>\n<p>Uncertainty in Neural Network Word Embedding Exploration of Potential Threshold\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.06086\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Navid Rekabsaz, Mihai Lupu and Allan Hanbury<\/span><\/p>\n<p>Deep Learning Relevance: Creating Relevant Information (as Opposed to Retrieving it) <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07660\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" 
src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Christina Lioma, Birger Larsen, Casper Petersen and Jakob Grue Simonsen<\/span><\/p>\n<p>Learning Dynamic Classes of Events using Stacked Multilayer Perceptron Networks\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07219\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Nattiya Kanhabua, Huamin Ren and Thomas B. Moeslund<\/span><\/p>\n<p>Representing Documents and Queries as Sets of Word Embedded Vectors for Information Retrieval\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07869\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Debasis Ganguly, Dwaipayan Roy, Mandar Mitra and Gareth Jones<\/span><\/p>\n<p>Using Word Embeddings for Automatic Query Expansion\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07608\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" 
alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Dwaipayan Roy, Debjyoti Paul and Mandar Mitra<\/span><\/p>\n<p>Modelling User Preferences using Word Embeddings for Context-Aware Venue Recommendation\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07828\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Jarana Manotumruksa, Craig Macdonald and Iadh Ounis<\/span><\/p>\n<p>Using Word Embeddings in Twitter Election Classification\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/arxiv.org\/abs\/1606.07006\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<span style=\"color: #999999\">Xiao Yang, Craig Macdonald and Iadh Ounis<\/span><\/p>\n<p>&nbsp;<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Lessons from the Trenches\"} --><!-- wp:freeform --><p style=\"text-align: justify\">The Lessons from the Trenches session will be a series of &#8220;lightning talks&#8221; by\u00a0researchers\u00a0who are actively working at the intersection of information retrieval and neural networks and who\u00a0want to share their personal insights and lessons learned with the broader 
community. In particular, we are hoping to hear about:<\/p>\n<ul style=\"text-align: justify\">\n<li>Key challenges faced in making neural models work effectively for IR tasks<\/li>\n<li>Best practices and related insights<\/li>\n<li>Negative results<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p style=\"text-align: justify\">The following people have signed up to present at this session.<\/p>\n<ul>\n<li style=\"text-align: justify\">Sergey Nikolenko<\/li>\n<li style=\"text-align: justify\">Qingyao Ai<\/li>\n<li style=\"text-align: justify\">Debasis Ganguly<\/li>\n<li style=\"text-align: justify\">Alessandro Moschitti<\/li>\n<li style=\"text-align: justify\">Jun Xu<\/li>\n<li style=\"text-align: justify\">Grady Simon<\/li>\n<li style=\"text-align: justify\">Alexey Borisov<\/li>\n<li style=\"text-align: justify\">Bhaskar Mitra<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Call for Papers\"} --><!-- wp:freeform --><p align=\"justify\">We solicit submissions of papers of two to six pages (excluding references), representing reports of original research, preliminary research results, proposals for new work, descriptions of neural network-based toolkits tailored for IR, and position papers. Papers presented at the workshop will be required to be uploaded to arXiv.org but will be considered <strong>non-archival<\/strong>, and may be submitted elsewhere (modified or not), although the workshop site will maintain a link to the arXiv versions. 
This makes the workshop a forum for the presentation and discussion of current work, without preventing the work from being published elsewhere.<\/p>\n<p>We are interested in submissions relevant to the following main themes:<\/p>\n<ol>\n<li>The application of neural network models in IR tasks, including but not limited to:\n<ul>\n<li>Full text document retrieval, passage retrieval, question answering<\/li>\n<li>Web search, searching social media, distributed information retrieval, entity ranking<\/li>\n<li>Learning to rank combined with neural network based representation learning<\/li>\n<li>User and task modelling, personalized search, diversity<\/li>\n<li>Query formulation assistance, query recommendation, conversational search<\/li>\n<li>Multimedia retrieval<\/li>\n<\/ul>\n<\/li>\n<li>Fundamental modelling challenges faced in such applications, including but not limited to:\n<ul>\n<li>Learning dense representations for long documents<\/li>\n<li>Dealing with rare queries and rare words<\/li>\n<li>Modelling text at different granularities (character, word, passage, document)<\/li>\n<li>Compositionality of vector representations<\/li>\n<li>Jointly modelling queries, documents, entities and other structured\/knowledge data<\/li>\n<\/ul>\n<\/li>\n<li>Best practices for research and development in the area, dealing with concerns such as:\n<ul>\n<li>Finding sufficient publicly-available training data<\/li>\n<li>Baselines, test data, avoiding overfitting<\/li>\n<li>Neural network toolkits<\/li>\n<li>Real-world use cases, deployment at scale<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p align=\"justify\">All papers will be peer reviewed (single-blind) by the program committee and judged by their relevance to the workshop, especially to the main themes identified above, and their potential to generate discussion. All submissions must be formatted according to the ACM SIG proceedings template. 
Please note that at least one of the authors of each accepted paper must register for the workshop and present the paper in person.<\/p>\n<p>Submission URL: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/easychair.org\/conferences\/?conf=neuir2016\" target=\"_blank\" rel=\"nofollow\">https:\/\/easychair.org\/conferences\/?conf=neuir2016<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Organization\"} --><!-- wp:freeform --><p><strong>Organizers<\/strong><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nickcr\">Nick Craswell<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft, Bellevue, US<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/ciir.cs.umass.edu\/personnel\/croft.html\">W. 
Bruce Croft<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Massachusetts, Amherst, US<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.bigdatalab.ac.cn\/~gjf\">Jiafeng Guo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Chinese Academy of Sciences, Beijing, China<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bmitra\">Bhaskar Mitra<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft, Cambridge, UK<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/staff.fnwi.uva.nl\/m.derijke\">Maarten de Rijke<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Amsterdam, Amsterdam, The Netherlands<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Program Committee<\/strong><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.da.inf.ethz.ch\/people\/CarstenEickhoff\/\">Carsten Eickhoff<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, ETH Zurich<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.computing.dcu.ie\/~dganguly\/\">Debasis Ganguly<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Dublin City University<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kahofman\/\">Katja Hofmann<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft Research<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.hangli-hl.com\/\">Hang Li<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Huawei Technologies<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" 
href=\"http:\/\/www.cs.nyu.edu\/~mirowski\/\">Piotr Mirowski<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Google DeepMind<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/disi.unitn.it\/moschitti\/\">Alessandro Moschitti<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Qatar Computing Research Institute, HBKU<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/events.yandex.com\/people\/11561\/\">Pavel Serdyukov<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Yandex<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/pomino.isti.cnr.it\/~silvestr\/\">Fabrizio Silvestri<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Yahoo Labs<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www-etud.iro.umontreal.ca\/~sordonia\/\">Alessandro Sordoni<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Montreal<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- \/wp:msr\/content-tabs -->","tab-content":[{"id":0,"name":"Summary","content":"<p style=\"text-align: justify\" align=\"justify\">(The final report on the workshop is available <a href=\"https:\/\/staff.fnwi.uva.nl\/m.derijke\/wp-content\/papercite-data\/pdf\/craswell-report-2016.pdf\" target=\"_blank\">here<\/a>.)<\/p>\r\n<p style=\"text-align: justify\" align=\"justify\">In recent years, deep neural networks have yielded significant performance improvements on speech recognition and computer vision tasks, as well as led to exciting breakthroughs in novel application areas such as automatic voice translation, image captioning, 
and conversational agents. Although deep neural networks have demonstrated good performance on natural language processing (NLP) tasks, their performance on IR tasks has received relatively little scrutiny.<\/p>\r\n<p style=\"text-align: justify\" align=\"justify\">The scarcity of positive results in information retrieval is partly because IR tasks such as ranking are fundamentally different from NLP tasks, and partly because the IR and neural network communities are only beginning to focus on applying these techniques to core information retrieval problems. Given that deep learning has made such a big impact, first on speech processing and computer vision and now, increasingly, also on computational linguistics, it seems clear that deep learning will have a major impact on information retrieval and that this is an ideal time for a workshop in this area. Our focus is on the applicability of deep neural networks to information retrieval: demonstrating performance improvements on public or private information retrieval datasets, identifying key modelling challenges and best practices, and thinking about what insights deep neural network architectures give us about information retrieval problems.<\/p>\r\n<p style=\"text-align: justify\" align=\"justify\"><strong>Neu-IR 2016 <\/strong>will be a highly interactive full-day workshop that will provide a forum for academic and industrial researchers working at the intersection of IR and neural networks. 
The purpose is to provide an opportunity for people to present new work and early results, compare notes on neural network toolkits, share best practices, and discuss the main challenges facing this line of research.<\/p>\r\n<p style=\"text-align: justify\" align=\"justify\">Please use the tabs above to see the program, the accepted papers, and other details of this workshop.<\/p>"},{"id":1,"name":"Program","content":"<p align=\"justify\">Neu-IR will be a highly interactive full-day workshop, featuring a mix of presentation and interaction formats. The full schedule is presented below.<\/p>\r\n<strong>Morning Session I<\/strong>\r\n<span style=\"color: #999999\">09:00 \u2013 10:30<\/span>\r\n<p style=\"padding-left: 30px\">Welcome and opening announcements [<a href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/neuir-2016-opening-note\">slides<\/a>]\r\nBhaskar Mitra\r\n<span style=\"color: #999999\">15 mins<\/span><\/p>\r\n<p style=\"padding-left: 30px\">Keynote: Recurrent Networks and Beyond [<a href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/recurrent-networks-and-beyond-by-tomas-mikolov\">slides<\/a>]\r\nTomas Mikolov, Facebook AI Research\r\n<span style=\"color: #999999\">45 mins<\/span><\/p>\r\n<p style=\"padding-left: 30px\">Paper: Query Expansion with Locally-Trained Word Embeddings [<a href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/query-expansion-with-locallytrained-word-embeddings-neuir-2016\">slides<\/a>]\r\nFernando Diaz, Bhaskar Mitra and Nick Craswell\r\n<span style=\"color: #999999\">15 mins<\/span><\/p>\r\n<p style=\"padding-left: 30px\">Paper: Uncertainty in Neural Network Word Embedding Exploration of Potential Threshold [<a href=\"http:\/\/de.slideshare.net\/NavidRekabsaz\/uncertainty-in-neural-network-word-embedding-exploration-of-threshold-for-similarity\">slides<\/a>]\r\nNavid Rekabsaz, Mihai Lupu and Allan Hanbury\r\n<span style=\"color: #999999\">15 mins<\/span><\/p>\r\n<strong>Coffee Break<\/strong>\r\n<span style=\"color: 
#999999\">10:30 \u2013 11:00<\/span>\r\n\r\n<strong>Morning Session II<\/strong>\r\n<span style=\"color: #999999\">11:00 \u2013 12:30<\/span>\r\n<p style=\"padding-left: 30px\">Lessons from the Trenches [<a href=\"http:\/\/www.slideshare.net\/BhaskarMitra3\/neuir-2016-lessons-from-the-trenches\">slides<\/a>]\r\n<span style=\"color: #999999\">45 mins<\/span><\/p>\r\n<p style=\"padding-left: 30px\">Poster presentations\r\n<span style=\"color: #999999\">45 mins<\/span><\/p>\r\n<strong>Lunch Break<\/strong>\r\n<span style=\"color: #999999\">12:30 \u2013 14:00<\/span>\r\n\r\n<strong>Afternoon Session I<\/strong>\r\n<span style=\"color: #999999\">14:00 \u2013 15:30<\/span>\r\n<p style=\"padding-left: 30px\">Keynote: Does IR Need Deep Learning? [<a href=\"http:\/\/www.hangli-hl.com\/uploads\/3\/4\/4\/6\/34465961\/does_ir_need_deep_learning.pdf\">slides<\/a>]\r\nHang Li, Huawei Technologies\r\n<span style=\"color: #999999\">45 mins<\/span><\/p>\r\n<p style=\"padding-left: 30px\">Paper: Modelling User Preferences using Word Embeddings for Context-Aware Venue Recommendation [<a href=\"https:\/\/drive.google.com\/file\/d\/0BzMK-0IWc2LeU2gzRDNCX0owd2M\/view\">slides<\/a>]\r\nJarana Manotumruksa, Craig Macdonald and Iadh Ounis\r\n<span style=\"color: #999999\">15 mins<\/span><\/p>\r\n<p style=\"padding-left: 30px\">Paper: A Study of MatchPyramid Models on Ad-hoc Retrieval [<a href=\"http:\/\/www.bigdatalab.ac.cn\/~gjf\/papers\/2016\/NEUIR_talk.pdf\">slides<\/a>]\r\nLiang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu and Xueqi Cheng\r\n<span style=\"color: #999999\">15 mins<\/span><\/p>\r\n<p style=\"padding-left: 30px\">Paper: Emulating Human Conversations using Convolutional Neural Network-based IR [<a href=\"http:\/\/www.slideshare.net\/secret\/t6TIb6uEDuMgfB\">slides<\/a>]\r\nAbhay Prakash, Chris Brockett and Puneet Agrawal\r\n<span style=\"color: #999999\">15 mins<\/span><\/p>\r\n<strong>Coffee Break<\/strong>\r\n<span style=\"color: #999999\">15:30 \u2013 
16:00<\/span>\r\n\r\n<strong>Afternoon Session II<\/strong>\r\n<span style=\"color: #999999\">16:00 \u2013 17:45<\/span>\r\n<p style=\"padding-left: 30px\">Breakout session\r\n<span style=\"color: #999999\">45 mins<\/span><\/p>\r\n<p style=\"padding-left: 30px\">Breakout session retrospective\r\n<span style=\"color: #999999\">45 mins<\/span><\/p>\r\n<p style=\"padding-left: 30px\">Concluding remarks\r\n<span style=\"color: #999999\">15 mins<\/span><\/p>"},{"id":2,"name":"Keynotes","content":"<h3 align=\"justify\">Recurrent Networks and Beyond<\/h3>\r\n<p align=\"justify\">Tomas Mikolov, Facebook AI Research<\/p>\r\n<p align=\"justify\"><a href=\"https:\/\/research.facebook.com\/tomas-mikolov\"><img class=\"alignleft wp-image-242012 size-thumbnail\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/mikolov-150x150.png\" alt=\"mikolov\" width=\"150\" height=\"150\" \/><\/a>Abstract: In this talk, I will give a brief overview of recurrent networks and their applications. I will then present several extensions that aim to help these powerful models learn more patterns from training data. This will include a simple modification of the architecture that allows the network to capture longer context information, and an architecture that can learn complex algorithmic patterns. The talk will conclude with a discussion of a long-term research plan for advancing machine learning techniques towards the development of artificial intelligence.<\/p>\r\n<p align=\"justify\">Bio: <a href=\"https:\/\/research.facebook.com\/tomas-mikolov\" target=\"_new\">Tomas Mikolov<\/a>\u00a0has been a research scientist at Facebook AI Research since May 2014. Previously,\u00a0he was a member of the Google Brain team, where\u00a0he developed and implemented efficient algorithms for computing distributed representations of words (word2vec project). 
He obtained his PhD from Brno University of Technology (Czech Republic) for\u00a0his work on recurrent neural network-based language models (RNNLM).\u00a0His long-term research goal is to develop intelligent machines capable of learning and communicating with people using natural language.<\/p>\r\n&nbsp;\r\n<h3 align=\"justify\">Does IR Need Deep Learning?<\/h3>\r\n<p align=\"justify\">Hang Li, Huawei Technologies<\/p>\r\n<p align=\"justify\"><a href=\"http:\/\/www.hangli-hl.com\/\"><img class=\"alignleft wp-image-242015 size-thumbnail\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/HangLi-150x150.jpg\" alt=\"HangLi\" width=\"150\" height=\"150\" \/><\/a>Abstract: In recent years, deep learning has become the key technology of state-of-the-art systems in many areas of computer science, such as computer vision, speech processing, and natural language processing. A question naturally arises: can deep learning also bring breakthroughs to IR (information retrieval)? In fact, a great deal of effort has been made to address this question, and significant progress has been achieved. Yet there is still doubt about whether this is the case.<\/p>\r\n<p align=\"justify\">In this talk, I will argue that, if we take a broad view of IR, we arrive at the conclusion that deep learning can indeed greatly boost IR. It has been observed that deep learning can make great improvements on some hard IR problems, such as question answering over knowledge bases and image retrieval; on the other hand, for some traditional and comparatively easier IR tasks, such as document retrieval, the improvements might not be as notable. I will introduce some of the work on deep learning for IR conducted at Huawei Noah\u2019s Ark Lab to support my claim. 
I will also discuss the strengths and limitations of deep learning, IR problems on which deep learning can potentially make significant contributions, and future directions of research on IR.<\/p>\r\n<p align=\"justify\">Bio: <a href=\"http:\/\/www.hangli-hl.com\/\">Hang Li<\/a> is director of the Noah\u2019s Ark Lab of Huawei Technologies and an adjunct professor at Peking University and Nanjing University. He is an ACM Distinguished Scientist. His research areas include information retrieval, natural language processing, statistical machine learning, and data mining. Hang graduated from Kyoto University in 1988 and earned his PhD from the University of Tokyo in 1998. He worked at the NEC lab as a researcher from 1991 to 2001, and at Microsoft Research Asia as a senior researcher and research manager from 2001 to 2012. He joined Huawei Technologies in 2012. Hang has published three technical books and more than 120 technical papers at top international conferences including SIGIR, WWW, WSDM, ACL, EMNLP, ICML, NIPS, SIGKDD, AAAI, IJCAI, and in top international journals including CL, NLE, JMLR, TOIS, IRJ, IPM, TKDE, TWEB, TIST. Papers by Hang and his colleagues received the SIGKDD\u201908 best application paper award, the SIGIR\u201908 best student paper award, and the ACL\u201912 best student paper award. Hang worked on the development of several products, such as Microsoft SQL Server 2005, Office 2007, Live Search 2008, Bing 2009, Office 2010, Bing 2010, Office 2012, and Huawei Smartphones 2014. He has 42 granted US patents. 
Hang is also very active in the research community and has served or is serving top international conferences as PC chair, senior PC member, or PC member, including SIGIR, WWW, WSDM, ACL, NAACL, EMNLP, NIPS, SIGKDD, ICDM, IJCAI, ACML, and top international journals as associate editor or editorial board member, including CL, IRJ, TIST, JASIST, JCST.<\/p>"},{"id":3,"name":"Accepted Papers","content":"<p align=\"justify\">We had 27 submissions (excluding three incomplete submissions). Every paper was reviewed by at least two members of the program committee, and 19 submissions were accepted\u00a0(an acceptance rate of 73%). Among the accepted papers, there were a few popular themes: 8 papers were related to the learning and application of word embeddings, and 10 papers focused on applications of deep neural networks to different IR tasks. The accepted papers also covered a broad range\u00a0of tasks, including question answering, proactive IR, knowledge-based IR,\u00a0conversational models and\u00a0text-to-image, but document ranking was a popular choice with 7 papers using it as the evaluation task.\u00a0The word cloud summary (generated using <a href=\"http:\/\/www.wordle.net\/\">http:\/\/www.wordle.net<\/a>)\u00a0of the abstracts of the accepted papers highlights additional themes across all the submissions.<\/p>\r\n<img class=\"alignnone wp-image-241823 size-full\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/wordcloud-abstracts.png\" alt=\"wordcloud-abstracts\" width=\"812\" height=\"527\" \/>\r\n<p align=\"justify\">Geographically, the accepted papers (based on the first author) spanned\u00a09 countries and 3 continents (FR: 4, IN: 4, CN: 2, DK: 2, UK: 2, US: 2, AT: 1, FI: 1 and IT: 1). 
Based on the first author's affiliation, 2 of the accepted papers came from industry and the rest from academia.<\/p>\r\n&nbsp;\r\n\r\nThe full list of accepted papers is below:\r\n\r\nAn empirical study on large scale text classification with skip-gram embeddings\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.06623\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Georgios Balikas and Massih-Reza Amini<\/span>\r\n\r\nDeep Feature Fusion Network for Answer Quality Prediction in Community Question Answering <a href=\"http:\/\/arxiv.org\/abs\/1606.07103\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Sai Praneeth Suggu, Kushwanth N. Goutham T, Manoj K. Chinnakotla and Manish Shrivastava<\/span>\r\n\r\nSelective Term Proximity Scoring Via BP-ANN\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.07188\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Ju Yang, Rebecca Stones, Gang Wang and Xiaoguang Liu<\/span>\r\n\r\nAdaptability of Neural Networks on Varying Granularity IR Tasks\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.07565\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Daniel Cohen, Qingyao Ai and W. 
Bruce Croft<\/span>\r\n\r\nEmulating Human Conversations using Convolutional Neural Network-based IR\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.07056\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Abhay Prakash, Chris Brockett and Puneet Agrawal<\/span>\r\n\r\nA Study of MatchPyramid Models on Ad-hoc Retrieval\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.04648\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu and Xueqi Cheng<\/span>\r\n\r\nLearning text representation using recurrent convolutional neural network with highway layers\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.06905\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Ying Wen, Weinan Zhang, Rui Luo and Jun Wang<\/span>\r\n\r\nToward Word Embedding for Personalized Information Retrieval\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.06991\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Nawal Ould Amer, Philippe Mulhem and Mathias G\u00e9ry<\/span>\r\n\r\nToward a Deep Neural Approach for Knowledge-Based IR\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.07211\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span 
style=\"color: #999999\">Gia-Hung Nguyen, Lynda Tamine, Laure Soulier and Nathalie Bricon-Souf<\/span>\r\n\r\nQuery Expansion with Locally-Trained Word Embeddings\u00a0<a href=\"https:\/\/arxiv.org\/abs\/1605.07891\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Fernando Diaz, Bhaskar Mitra and Nick Craswell<\/span>\r\n\r\nLSTM-Based Predictions for Proactive Information Retrieval\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.06137\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Petri Luukkonen, Markus Koskela and Patrik Flor\u00e9en<\/span>\r\n\r\nPicture It In Your Mind: Generating High Level Visual Representations From Textual Descriptions <a href=\"http:\/\/arxiv.org\/abs\/1606.07287\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Fabio Carrara, Andrea Esuli, Tiziano Fagni, Fabrizio Falchi and Alejandro Moreo Fern\u00e1ndez<\/span>\r\n\r\nUncertainty in Neural Network Word Embedding Exploration of Potential Threshold\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.06086\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Navid Rekabsaz, Mihai Lupu and Allan Hanbury<\/span>\r\n\r\nDeep Learning Relevance: Creating Relevant Information (as Opposed to Retrieving it) <a href=\"http:\/\/arxiv.org\/abs\/1606.07660\"><img class=\"alignnone wp-image-242024\" 
src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Christina Lioma, Birger Larsen, Casper Petersen and Jakob Grue Simonsen<\/span>\r\n\r\nLearning Dynamic Classes of Events using Stacked Multilayer Perceptron Networks\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.07219\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Nattiya Kanhabua, Huamin Ren and Thomas B. Moeslund<\/span>\r\n\r\nRepresenting Documents and Queries as Sets of Word Embedded Vectors for Information Retrieval\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.07869\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Debasis Ganguly, Dwaipayan Roy, Mandar Mitra and Gareth Jones<\/span>\r\n\r\nUsing Word Embeddings for Automatic Query Expansion\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.07608\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Dwaipayan Roy, Debjyoti Paul and Mandar Mitra<\/span>\r\n\r\nModelling User Preferences using Word Embeddings for Context-Aware Venue Recommendation\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.07828\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Jarana Manotumruksa, Craig Macdonald and Iadh Ounis<\/span>\r\n\r\nUsing Word Embeddings in 
Twitter Election Classification\u00a0<a href=\"http:\/\/arxiv.org\/abs\/1606.07006\"><img class=\"alignnone wp-image-242024\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/Pdf_icon.png\" alt=\"Pdf_icon\" width=\"27\" height=\"30\" \/><\/a>\r\n<span style=\"color: #999999\">Xiao Yang, Craig Macdonald and Iadh Ounis<\/span>\r\n\r\n&nbsp;"},{"id":4,"name":"Lessons from the Trenches","content":"<p style=\"text-align: justify\">The Lessons from the Trenches session will be a series of \"lightning talks\" by researchers actively working at the intersection of information retrieval and neural networks who want to share their personal insights and lessons learned with the broader community. In particular, we hope to hear about:<\/p>\r\n\r\n<ul style=\"text-align: justify\">\r\n \t<li>Key challenges faced in making neural models work effectively for IR tasks<\/li>\r\n \t<li>Best practices and related insights<\/li>\r\n \t<li>Negative results<\/li>\r\n<\/ul>\r\n&nbsp;\r\n<p style=\"text-align: justify\">The following people have signed up to present at this session.<\/p>\r\n\r\n<ul>\r\n \t<li style=\"text-align: justify\">Sergey Nikolenko<\/li>\r\n \t<li style=\"text-align: justify\">Qingyao Ai<\/li>\r\n \t<li style=\"text-align: justify\">Debasis Ganguly<\/li>\r\n \t<li style=\"text-align: justify\">Alessandro Moschitti<\/li>\r\n \t<li style=\"text-align: justify\">Jun Xu<\/li>\r\n \t<li style=\"text-align: justify\">Grady Simon<\/li>\r\n \t<li style=\"text-align: justify\">Alexey Borisov<\/li>\r\n \t<li style=\"text-align: justify\">Bhaskar Mitra<\/li>\r\n<\/ul>"},{"id":5,"name":"Call for Papers","content":"<p align=\"justify\">We solicit submission of papers of two to six pages (excluding references), representing reports of original research, preliminary research results, proposals for new work, descriptions of neural network-based toolkits tailored for IR, and position papers. 
Papers presented at the workshop must be uploaded to arXiv.org but will be considered <strong>non-archival<\/strong> and may be submitted elsewhere (modified or not), although the workshop site will maintain links to the arXiv versions. This makes the workshop a forum for the presentation and discussion of current work, without preventing the work from being published elsewhere.<\/p>\r\nWe are interested in submissions relevant to the following main themes:\r\n<ol>\r\n \t<li>The application of neural network models in IR tasks, including but not limited to:\r\n<ul>\r\n \t<li>Full text document retrieval, passage retrieval, question answering<\/li>\r\n \t<li>Web search, searching social media, distributed information retrieval, entity ranking<\/li>\r\n \t<li>Learning to rank combined with neural network-based representation learning<\/li>\r\n \t<li>User and task modelling, personalized search, diversity<\/li>\r\n \t<li>Query formulation assistance, query recommendation, conversational search<\/li>\r\n \t<li>Multimedia retrieval<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li>Fundamental modelling challenges faced in such applications, including but not limited to:\r\n<ul>\r\n \t<li>Learning dense representations for long documents<\/li>\r\n \t<li>Dealing with rare queries and rare words<\/li>\r\n \t<li>Modelling text at different granularities (character, word, passage, document)<\/li>\r\n \t<li>Compositionality of vector representations<\/li>\r\n \t<li>Jointly modelling queries, documents, entities and other structured\/knowledge data<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li>Best practices for research and development in the area, dealing with concerns such as:\r\n<ul>\r\n \t<li>Finding sufficient publicly-available training data<\/li>\r\n \t<li>Baselines, test data, avoiding overfitting<\/li>\r\n \t<li>Neural network toolkits<\/li>\r\n \t<li>Real-world use cases, deployment at scale<\/li>\r\n<\/ul>\r\n<\/li>\r\n<\/ol>\r\n<p align=\"justify\">All papers will be 
peer reviewed (single-blind) by the program committee and judged on their relevance to the workshop, especially to the main themes identified above, and their potential to generate discussion. All submissions must be formatted according to the ACM SIG proceedings template. Please note that at least one of the authors of each accepted paper must register for the workshop and present the paper in person.<\/p>\r\nSubmission URL: <a href=\"https:\/\/easychair.org\/conferences\/?conf=neuir2016\" target=\"_blank\" rel=\"nofollow\">https:\/\/easychair.org\/conferences\/?conf=neuir2016<\/a>"},{"id":6,"name":"Organization","content":"<strong>Organizers<\/strong>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nickcr\">Nick Craswell<\/a>, Microsoft, Bellevue, US\r\n<a href=\"http:\/\/ciir.cs.umass.edu\/personnel\/croft.html\">W. Bruce Croft<\/a>, University of Massachusetts, Amherst, US\r\n<a href=\"http:\/\/www.bigdatalab.ac.cn\/~gjf\">Jiafeng Guo<\/a>, Chinese Academy of Sciences, Beijing, China\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bmitra\">Bhaskar Mitra<\/a>, Microsoft, Cambridge, UK\r\n<a href=\"https:\/\/staff.fnwi.uva.nl\/m.derijke\">Maarten de Rijke<\/a>, University of Amsterdam, Amsterdam, The Netherlands\r\n\r\n&nbsp;\r\n\r\n<strong>Program Committee<\/strong>\r\n\r\n<a href=\"http:\/\/www.da.inf.ethz.ch\/people\/CarstenEickhoff\/\">Carsten Eickhoff<\/a>, ETH Zurich\r\n<a href=\"http:\/\/www.computing.dcu.ie\/~dganguly\/\">Debasis Ganguly<\/a>, Dublin City University\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kahofman\/\">Katja Hofmann<\/a>, Microsoft Research\r\n<a href=\"http:\/\/www.hangli-hl.com\/\">Hang Li<\/a>, Huawei Technologies\r\n<a href=\"http:\/\/www.cs.nyu.edu\/~mirowski\/\">Piotr Mirowski<\/a>, Google DeepMind\r\n<a href=\"http:\/\/disi.unitn.it\/moschitti\/\">Alessandro Moschitti<\/a>, Qatar Computing Research Institute, HBKU\r\n<a 
href=\"https:\/\/events.yandex.com\/people\/11561\/\">Pavel Serdyukov<\/a>, Yandex\r\n<a href=\"http:\/\/pomino.isti.cnr.it\/~silvestr\/\">Fabrizio Silvestri<\/a>, Yahoo Labs\r\n<a href=\"http:\/\/www-etud.iro.umontreal.ca\/~sordonia\/\">Alessandro Sordoni<\/a>, University of Montreal"}],"msr_startdate":"2016-07-21","msr_enddate":"2016-07-21","msr_event_time":"","msr_location":"Pisa, Italy","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"July 21, 2016","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":"<img width=\"960\" height=\"360\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/neuir-2016-logo-large-wpbg-2.png\" class=\"img-object-cover\" alt=\"\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/neuir-2016-logo-large-wpbg-2.png 2879w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/neuir-2016-logo-large-wpbg-2-300x112.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/neuir-2016-logo-large-wpbg-2-768x288.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/neuir-2016-logo-large-wpbg-2-1024x384.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/05\/neuir-2016-logo-large-wpbg-2-1920x720.png 1920w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","event_excerpt":"The first international Neu-IR (pronounced \"new IR\") workshop on neural information retrieval will be hosted at SIGIR 2016 in Pisa, Tuscany, Italy on 21 July, 2016.","msr_research_lab":[],"related-researchers":[{"type":"user_nicename","display_name":"Nick Craswell","user_id":33088,"people_section":"Group 
1","alias":"nickcr"}],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[267093],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/231283","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":3,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/231283\/revisions"}],"predecessor-version":[{"id":1147357,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/231283\/revisions\/1147357"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/241994"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=231283"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=231283"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=231283"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=231283"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=231283"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=231283"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=231283"},{"taxonomy":"msr-post-option",
"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=231283"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=231283"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}