{"id":1088157,"date":"2024-10-22T12:57:14","date_gmt":"2024-10-22T19:57:14","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&#038;p=1088157"},"modified":"2025-06-03T11:00:46","modified_gmt":"2025-06-03T18:00:46","slug":"neurips-2024","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/neurips-2024\/","title":{"rendered":"NeurIPS 2024"},"content":{"rendered":"\n\n\n\n\n<p>Microsoft is a proud sponsor of the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/neurips.cc\/Conferences\/2024\" rel=\"noopener noreferrer\" target=\"_blank\">38th Conference on Neural Information Processing Systems<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> (NeurIPS). This interdisciplinary conference brings together researchers in machine learning, neuroscience, statistics, optimization, computer vision, natural language processing, life sciences, natural sciences, social sciences, and other adjacent fields.<\/p>\n\n\n\n<p>We are pleased to share Microsoft has over 100 accepted papers at this year\u2019s conference. Stop by our booth (#445) toward the back of hall A to talk to our team, learn more about research at Microsoft, and explore our open career positions.<\/p>\n\n\n\n<h2 class=\"wp-block-heading h4\" id=\"invited-keynote\">Invited Keynote<\/h2>\n\n\n\n<p>Congratulations to <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lidongz\/\">Lidong Zhou<\/a>, who has been selected as a keynote speaker. Lidong will speak on co-innovation of AI and Systems.<\/p>\n\n\n\n<p>Are you interested in being contacted for Microsoft career opportunities? 
Share your contact details here:&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/forms.office.com\/Pages\/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR_IIIGgraKdEniGxBWyWXn5UMFhGTlIxM1NaTjVPNjNGVjBKOTFDRlU3SC4u&origin=QRCode\" target=\"_blank\" rel=\"noopener noreferrer\">Recruiting at Microsoft \u2013 NeurIPS 2024<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading h4\" id=\"orals\">Orals<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cvqa-culturally-diverse-multilingual-visual-question-answering-benchmark\/\">CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark<\/a>: Pranjal Chitale<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/not-all-tokens-are-what-you-need-for-pretraining\/\">Not All Tokens Are What You Need for Pretraining<\/a>: Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Jian Jiao, Nan Duan, Weizhu Chen<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/reinforcement-learning-under-latent-dynamics-toward-statistical-and-algorithmic-modularity\/\">Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity<\/a>: Dylan J Foster, Akshay Krishnamurthy<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/vasa-1-lifelike-audio-driven-talking-faces-generated-in-real-time\/\">VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time<\/a>: Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zou, Yizhong Zhang, Xin Tong, Baining Guo<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/you-only-cache-once-decoder-decoder-architectures-for-language-models\/\">You Only Cache Once: Decoder-Decoder Architectures for Language Models<\/a>: Li Dong, Yi Zhu, Shaohan Huang, Wenhui 
Wang, Quanlu Zhang, Furu Wei<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading h4\" id=\"spotlight-sessions\">Spotlight Sessions<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-study-of-plasticity-loss-in-on-policy-deep-reinforcement-learning\/\">A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning<\/a>: Jordan Ash<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/advancing-spiking-neural-networks-for-sequential-modeling-with-central-pattern-generators\/\">Advancing Spiking Neural Networks for Sequential Modeling through Central Pattern Generators<\/a>: Dongqi Han, Yansen Wang<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/beyond-assouad-fano-and-le-cam-toward-unified-lower-bounds-for-statistical-estimation-and-interactive-decision-making\/\">Beyond Assouad, Fano, and Le Cam: Toward Unified Lower Bounds for Statistical Estimation and Interactive Decision Making<\/a>: Dylan J Foster<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bpqp-a-differentiable-convex-optimization-framework-for-efficient-end-to-end-learning\/\">BPQP: A Differentiable Convex Optimization Framework for Efficient End-to-End Learning<\/a>: Xiao Yang, Xu Yang, Weiqing Liu, Lewen Wang, Jiang Bian<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/compositional-generalization-across-distributional-shifts-with-sparse-tree-operations\/\">Compositional Generalization Across Distributional Shifts with Sparse Tree Operations<\/a>: Paul Smolensky, Jianfeng Gao, Roland Fernandez<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/dataset-and-lessons-learned-from-the-2024-satml-llm-capture-the-flag-competition\/\">Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition<\/a>: Reshmi Ghosh, Ahmed Salem, Giovanni Cherubin, 
Santiago Zanella-Beguelin, Sahar Abdelnabi<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/diffusion-for-world-modeling-visual-details-matter-in-atari\/\">Diffusion for World Modeling: Visual Details Matter in Atari<\/a>: Anssi Kanervisto, Tim Pearce, Cuiling Lan, Yan Lu<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/discoveryworld-a-virtual-environment-for-developing-and-evaluating-automated-scientific-discovery-agents\/\">DiscoveryWorld: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents<\/a>: Marc-Alexandre C\u00f4t\u00e9<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/efficient-adversarial-training-in-llms-with-continuous-attacks\/\">Efficient Adversarial Training in LLMs with Continuous Attacks<\/a>: Alessandro Sordoni<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/erbench-an-entity-relationship-based-automatically-verifiable-hallucination-benchmark-for-large-language-models\/\">ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models<\/a>: Ruochen Xu, Xing Xie<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/generalized-linear-bandits-with-limited-adaptivity\/\">Generalized Linear Bandits with Limited Adaptivity<\/a>: Nirjhar Das, Gaurav Sinha<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/human-aware-vision-and-language-navigation-bridging-simulation-to-reality-with-dynamic-human-interactions\/\">Human-Aware Vision-and-Language Navigation: Bridging Simulation to Reality with Dynamic Human Interactions<\/a>: Qi Dai<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/identifying-equivalent-training-dynamics\/\">Identifying Equivalent Training Dynamics<\/a>: Juan Bello-Rivas<\/li>\n\n\n\n<li><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/implicit-curriculum-in-procgen-made-explicit\/\">Implicit Curriculum in Procgen Made Explicit<\/a>: Kaixin Wang<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/is-behavior-cloning-all-you-need-understanding-horizon-in-imitation-learning\/\">Is Behavior Cloning All You Need? Understanding Horizon in Imitation Learning<\/a>: Dylan J Foster, Adam Block<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/minference-1-0-accelerating-pre-filling-for-long-context-llms-via-dynamic-sparse-attention\/\">MInference: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention<\/a>: Huiqiang Jiang, Chengruidong Zhang, Qianhui Wu, Xufang Luo, Surin Ahn, Zhenhua Han, Amir Abdi, Chin-Yew Lin, Yuqing Yang, Lili Qiu<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/the-power-of-resets-in-online-reinforcement-learning\/\">The Power of Resets in Online Reinforcement Learning<\/a>: Dylan J Foster<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/videogui-a-benchmark-for-gui-automation-from-instructional-videos\/\">VideoGUI: A Benchmark for GUI Automation from Instructional Videos<\/a>: Linjie Li, Lijuan Wang<\/li>\n\n\n\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/voila-a-aligning-vision-language-models-with-users-gaze-attention\/\">Voila-A: Aligning Vision-Language Models with User&#8217;s Gaze Attention<\/a>: Lei Ji, Nan Duan<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><strong>General Chair<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lmackey\/\">Lester Mackey<\/a><\/p>\n\n\n\n<p><strong>Program Chair Assistant<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/t-brahmani\/\">Babak Rahmani<\/a><\/p>\n\n\n\n<p><strong>Competition 
Chair<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/taoqin\/\">Tao Qin<\/a><\/p>\n\n\n\n<p><strong>Workshop Chair<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/adilsalim\/\">Adil Salim<\/a><\/p>\n\n\n\n<p><strong>Communication Chair<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lualex\/\">Alex X Lu<\/a><\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-media-text has-vertical-margin-small  has-vertical-padding-none  is-stacked-on-mobile is-style-border\" style=\"grid-template-columns:40% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-1024x576.png\" alt=\"Microsoft Research Forum - abstract with shapes\" class=\"wp-image-1024539 size-full\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-1024x576.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-240x135.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-640x360.png 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-960x540.png 960w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-1280x720.png 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788.png 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<h3 class=\"wp-block-heading h4\" id=\"microsoft-research-forum\">Microsoft Research Forum<\/h3>\n\n\n\n<p>Join us for a continuous exchange of ideas about science and technology research in the era of general AI. This series explores recent research advances, bold new ideas, and important discussions with the global research community. Register to attend upcoming episodes and watch previous episodes available on demand.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/researchforum.microsoft.com\/?OCID=msr_researchforum_Accelerator_NEURIPS24_conference_Webpage\" target=\"_blank\" rel=\"noreferrer noopener\">Register now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"microsoft-booth-schedule\">Microsoft booth schedule<\/h2>\n\n\n\n<p>Stop by our&nbsp;booth (#445, toward the back of Hall A)&nbsp;to chat with our experts, see demos of our latest research and find out about&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/event\/neurips-2024\/opportunities\/\" target=\"_blank\" rel=\"noreferrer noopener\">career opportunities<\/a>&nbsp;with Microsoft.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Tuesday, December 10<\/th><th>Session title<\/th><th>Speaker<\/th><th>Session type<\/th><\/tr><\/thead><tbody><tr><td>13:30 &#8211; 15:30<\/td><td>Recruitment Hour<\/td><td>Toni Brown-Bell, Cammy Vasquez, Shiloah Vasquez<br>University Recruiting, 
GTA<\/td><td>Recruitment<\/td><\/tr><tr><td>14:00 &#8211; 14:30<\/td><td>Recruitment Hour<\/td><td>Longqi Yang, E+D Office of Applied Research<br>John Langford<\/td><td>Meet & Greet<\/td><\/tr><tr><td>14:30 &#8211; 15:00<\/td><td>MAIRA-2: Grounded Radiology Reporting<\/td><td>Shruthi Bannur, Kenza Bouzid<br>Microsoft Health Futures<\/td><td>Talk<\/td><\/tr><tr><td>15:00 &#8211; 15:30<\/td><td>RadFact: An LLM-based evaluation metric for AI-generated radiology reporting<\/td><td>Stephanie Hyland, Daniel Coelho de Castro<br>Microsoft Health Futures<\/td><td>Talk<\/td><\/tr><tr><td>15:30 &#8211; 16:00<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>16:00 &#8211; 16:30<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>16:30 &#8211; 17:00<\/td><td>Empowering Scientific Innovation with Generative AI<\/td><td>Yingce Xia, Tao Qin<br>AI for Science<\/td><td>Talk<\/td><\/tr><tr><td>16:30 &#8211; 17:00<\/td><td>Coffee with the AI for Science team<\/td><td>AI for Science<\/td><td>Coffee Chat<\/td><\/tr><tr><td>17:00 &#8211; 17:30<\/td><td>Meet and Greet with Peeyush Kumar, <em>Senior Researcher, MSR Redmond<\/em> on AI Interaction and Learning<\/td><td>Peeyush Kumar<br>Microsoft Research Redmond<\/td><td>Meet & Greet<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Wednesday, December 11<\/th><th>Session title<\/th><th>Speaker<\/th><th>Session type<\/th><\/tr><\/thead><tbody><tr><td>9:00 &#8211; 9:30<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>9:30 &#8211; 10:00<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>10:00 &#8211; 11:30<\/td><td>Microsoft Research Asia Promotion Session<\/td><td>Microsoft 
Research Asia<\/td><td>Talks and Meet & Greets<\/td><\/tr><tr><td>12:00 &#8211; 13:00<\/td><td>WHAM: World and Human Action Model<\/td><td>Sam Devlin, Sergio Valcarcel Macua, Sarah Parisot, Linda Wen<br>Microsoft Research Redmond<\/td><td>Demo<\/td><\/tr><tr><td>13:00 &#8211; 13:30<\/td><td>MAIDAP Information Session<\/td><td>Zoey Huang<br>MAIDAP<\/td><td>Info session<\/td><\/tr><tr><td>13:30 &#8211; 14:00<\/td><td>Coffee and Applied Research Hiring<\/td><td>Longqi Yang & OAR Members<br>E+D Office of Applied Research<\/td><td>Recruitment<\/td><\/tr><tr><td>14:00 &#8211; 15:00<\/td><td>Microsoft Research Asia Promotion Session<\/td><td>Microsoft Research Asia<\/td><td>Talk and Meet & Greet<\/td><\/tr><tr><td>14:00 &#8211; 16:00<\/td><td>Recruitment Hour<\/td><td>Toni Brown-Bell, Cammy Vasquez, Shiloah Vasquez<br>University Recruiting, GTA<\/td><td>Recruitment<\/td><\/tr><tr><td>14:30 &#8211; 14:45<\/td><td>Evaluating Generative AI Systems is a Social Science Measurement Challenge<\/td><td>Hanna Wallach<br>Microsoft Research New York<\/td><td>Talk<\/td><\/tr><tr><td>14:45 &#8211; 15:30<\/td><td>Phi Silica and Multimodal Phi Silica with Quarot for Outlier-Free 4-bit inference<\/td><td>Pashmina Cameron, Maximilian Croci, Mohsen Fayyaz<br>Windows ASG, MSR\/MAI<\/td><td>Demo<\/td><\/tr><tr><td>15:30 &#8211; 16:00<\/td><td>Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale<\/td><td>Rogerio Bonatti<br>W&D Applied Science Group<\/td><td>Talk<\/td><\/tr><tr><td>16:00 &#8211; 16:30<\/td><td>Segmentation-aware Multimodal LLM for radiology<\/td><td>Valentina Salvatelli<br>Microsoft Health Futures<\/td><td>Talk<\/td><\/tr><tr><td>16:30 &#8211; 16:45<\/td><td>Evaluating Generative AI Systems is a Social Science Measurement Challenge<\/td><td>Hanna Wallach<br>Microsoft Research New York<\/td><td>Talk<\/td><\/tr><tr><td>16:30 &#8211; 17:00<\/td><td>Coffee with the AI for Science team<\/td><td>AI For Science<\/td><td>Coffee Chat<\/td><\/tr><tr><td>16:45 &#8211; 
17:00<\/td><td>Accurate retrosynthesis prediction with Chimera<\/td><td>Krzysztof Maziarz<br>AI For Science<\/td><td>Talk<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Thursday, December 12<\/th><th>Session title<\/th><th>Speaker<\/th><th>Session type<\/th><\/tr><\/thead><tbody><tr><td>9:30 &#8211; 10:00<\/td><td>Phi Silica and Multimodal Phi Silica with Quarot for Outlier-Free 4-bit inference<\/td><td>Pashmina Cameron, Mohsen Fayyaz<br>Windows ASG, MSR\/MAI<\/td><td>Demo<\/td><\/tr><tr><td>10:00 &#8211; 10:30<\/td><td>Efficient\/Interpretable language and multimodal modeling<\/td><td>Chandan Singh, Lucas Liu, Jianwei Yang<br>Microsoft Research, Redmond<br>Deep Learning Group<\/td><td>Coffee Chat<\/td><\/tr><tr><td>10:30 &#8211; 12:30<\/td><td>Recruitment Hour<\/td><td>Toni Brown-Bell, Cammy Vasquez, Shiloah Vasquez<br>University Recruiting, GTA<\/td><td>Recruitment<\/td><\/tr><tr><td>10:30 &#8211; 11:00<\/td><td>Pyramid Vector Quantization for LLMs<\/td><td>Maximilian Croci, Tycho van der Ouderaa<br>MSR\/MAI<\/td><td>Talk<\/td><\/tr><tr><td>11:00 &#8211; 11:30<\/td><td>EUREKA: Evaluating and Understanding Large Foundation Models<\/td><td>Besmira Nushi, Vidhisha Balachandran, Neel Joshi<br>AI Frontiers<\/td><td>Talk<\/td><\/tr><tr><td>11:30 &#8211; 12:00<\/td><td>EUREKA: Evaluating and Understanding Large Foundation Models<\/td><td>Besmira Nushi, Vidhisha Balachandran, Neel Joshi<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>12:00 &#8211; 12:30<\/td><td>Generative AI for science discovery<\/td><td>Tao Qin and Yingce Xia<br>AI For Science<\/td><td>Talk<\/td><\/tr><tr><td>12:10 &#8211; 12:40<\/td><td>Coffee with the AI for Science Team<\/td><td>AI For Science<\/td><td>Coffee Chat<\/td><\/tr><tr><td>13:00 &#8211; 14:30<\/td><td>Two podcast sessions hosted by Eliza Strickland from IEEE Spectrum<\/td><td>Chris Bishop and Lidong Zhou<\/td><td>Podcast<\/td><\/tr><tr><td>14:30 &#8211; 
15:00<\/td><td>OmniParser for Pure Vision Based GUI Agent<\/td><td>Yadong Lu<br>AI Frontiers<\/td><td>Virtual Demo<\/td><\/tr><tr><td>15:00 &#8211; 15:30<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>15:30 &#8211; 16:00<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"microsoft-at-ml4h-2024\">Microsoft at ML4H 2024<\/h2>\n\n\n\n<p>Microsoft is a proud sponsor of the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/ahli.cc\/ml4h\/\" target=\"_blank\" rel=\"noopener noreferrer\">AHLI Machine Learning for Health (ML4H) Symposium<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> which aims to bring together a vibrant community of machine learning researchers, clinicians, and healthcare data experts. ML4H is a separate symposium co-located with NeurIPS that takes place on December 15-16, 2024, at the Pinnacle Hotel Harbourfront in Vancouver, Canada. 
Stop by our table to meet our researchers and learn more about accepted papers and research at Microsoft.<\/p>\n\n\n\n<p>Microsoft accepted publications:<\/p>\n\n\n\n<p><strong>MAIRA-Seg: Enhancing Radiology Report Generation with Segmentation-Aware Multimodal Large Language Models<\/strong><br><em><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/harssharma\/\">Harshita Sharma<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/vsalvatelli\/\">Valentina Salvatelli<\/a>; Shaury Srivastav; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kenzabouzid\/\">Kenza Bouzid<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/shbannur\/\">Shruthi Bannur<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dacoelh\/\">Daniel C. Castro<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/maxilse\/\">Maximilian Ilse<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbondtaylor\/\">Sam Bond-Taylor<\/a>; Mercy Prasanna Ranjit; Fabian Falck; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fperezgarcia\/\">Fernando P\u00e9rez-Garc\u00eda<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/antonsc\/\">Anton Schwaighofer<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hamurfet\/\">Hannah Richardson<\/a>; Maria Teodora Wetscherek; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sthyland\/\">Stephanie Hyland<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jaalvare\/\">Javier Alvarez-Valle<\/a><\/em><\/p>\n\n\n\n<p><strong><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/attribute-structuring-improves-llm-based-evaluation-of-clinical-text-summaries\/\" target=\"_blank\" 
rel=\"noreferrer noopener\">Attribute Structuring Improves LLM-Based Evaluation of Clinical Text Summaries<\/a><\/strong><br><em><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zelalemgero\/\">Zelalem Gero<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chansingh\/\">Chandan Singh<\/a>; Yiqing Xie; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/shezhan\/\">Sheng Zhang<\/a>; Praveen Subramanian; Paul Vozila; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tristan\/\">Tristan Naumann<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jfgao\/\">Jianfeng Gao<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hoifung\/\">Hoifung Poon<\/a><\/em><\/p>\n\n\n\n<p><strong><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-study-on-context-length-and-efficient-transformers-for-biomedical-image-analysis\/\">A Study on Context Length and Efficient Transformers for Biomedical Image Analysis<\/a><\/strong><br><em>Sarah Hooper; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xuehui\/\">Hui Xue<\/a><\/em><\/p>\n\n\n\n<p><strong><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/radphi-3-small-language-models-for-radiology\" target=\"_blank\" rel=\"noreferrer noopener\">RadPhi-3-Instruct: Small Language Models for Radiology<\/a><\/strong><br><em>Mercy Prasanna Ranjit; Shaury srivastav; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/taganu\/\">Tanuja Ganu<\/a><\/em><\/p>\n\n\n","protected":false},"excerpt":{"rendered":"<p>Microsoft is a proud sponsor of the 38th Conference on Neural Information Processing Systems (opens in new tab) (NeurIPS). This interdisciplinary conference brings together researchers in machine learning, neuroscience, statistics, optimization, computer vision, natural language processing, life sciences, natural sciences, social sciences, and other adjacent fields. 
We are pleased to share Microsoft has over 100 [&hellip;]<\/p>\n","protected":false},"featured_media":1095411,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2024-12-10","msr_enddate":"2024-12-15","msr_location":"Vancouver, BC, Canada","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"msr_hide_image_in_river":null,"footnotes":""},"research-area":[13556,13547],"msr-region":[197900],"msr-event-type":[197941],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[269148,269142],"msr-impact-theme":[],"class_list":["post-1088157","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-research-area-systems-and-networking","msr-region-north-america","msr-event-type-conferences","msr-locale-en_us","msr-post-option-approved-for-river","msr-post-option-include-in-river"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"Microsoft at NeurIPS 2024\",\"image\":{\"id\":1095411,\"url\":\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/10\/NeurIPS-2024_WebBanner_1920x720.jpg\",\"alt\":\"NeurIPS 2024 event header with abstract background pattern\"}} \/-->\n\n<!-- wp:msr\/content-tabs -->\n<!-- wp:msr\/content-tab -->\n<!-- wp:paragraph {\"placeholder\":\"Write content\u2026\"} -->\n<p>Microsoft is a proud sponsor of the <a href=\"https:\/\/neurips.cc\/Conferences\/2024\" rel=\"noreferrer noopener\" target=\"_blank\">38th Conference on Neural Information Processing Systems<\/a> (NeurIPS). 
This interdisciplinary conference brings together researchers in machine learning, neuroscience, statistics, optimization, computer vision, natural language processing, life sciences, natural sciences, social sciences, and other adjacent fields.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph {\"placeholder\":\"Write content\u2026\"} -->\n<p>We are pleased to share that Microsoft has over 100 accepted papers at this year\u2019s conference. Stop by our booth (#445) toward the back of Hall A to talk to our team, learn more about research at Microsoft, and explore our open career positions.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:heading {\"className\":\"h4\"} -->\n<h2 class=\"wp-block-heading h4\" id=\"invited-keynote\">Invited Keynote<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:paragraph -->\n<p>Congratulations to <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lidongz\/\">Lidong Zhou<\/a>, who has been selected as a keynote speaker. Lidong will speak on the co-innovation of AI and systems.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>Are you interested in being contacted for Microsoft career opportunities? 
Share your contact details here:&nbsp;<a href=\"https:\/\/forms.office.com\/Pages\/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR_IIIGgraKdEniGxBWyWXn5UMFhGTlIxM1NaTjVPNjNGVjBKOTFDRlU3SC4u&amp;origin=QRCode\" target=\"_blank\" rel=\"noreferrer noopener\">Recruiting at Microsoft \u2013 NeurIPS 2024<\/a><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:heading {\"className\":\"h4\"} -->\n<h2 class=\"wp-block-heading h4\" id=\"orals\">Orals<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cvqa-culturally-diverse-multilingual-visual-question-answering-benchmark\/\">CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark<\/a>: Pranjal Chitale<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/not-all-tokens-are-what-you-need-for-pretraining\/\">Not All Tokens Are What You Need for Pretraining<\/a>: Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Jian Jiao, Nan Duan, Weizhu Chen<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/reinforcement-learning-under-latent-dynamics-toward-statistical-and-algorithmic-modularity\/\">Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity<\/a>: Dylan J Foster, Akshay Krishnamurthy<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/vasa-1-lifelike-audio-driven-talking-faces-generated-in-real-time\/\">VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time<\/a>: Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zou, Yizhong Zhang, Xin Tong, Baining Guo<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/you-only-cache-once-decoder-decoder-architectures-for-language-models\/\">You Only Cache Once: Decoder-Decoder Architectures for Language Models<\/a>: Li Dong, Yi Zhu, Shaohan Huang, Wenhui Wang, Quanlu Zhang, Furu Wei<\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list -->\n\n<!-- wp:heading {\"className\":\"h4\"} -->\n<h2 class=\"wp-block-heading h4\" id=\"spotlight-sessions\">Spotlight Sessions<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:list -->\n<ul class=\"wp-block-list\"><!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-study-of-plasticity-loss-in-on-policy-deep-reinforcement-learning\/\">A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning<\/a>: Jordan Ash<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/advancing-spiking-neural-networks-for-sequential-modeling-with-central-pattern-generators\/\">Advancing Spiking Neural Networks for Sequential Modeling through Central Pattern Generators<\/a>: Dongqi Han, Yansen Wang<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/beyond-assouad-fano-and-le-cam-toward-unified-lower-bounds-for-statistical-estimation-and-interactive-decision-making\/\">Beyond Assouad, Fano, and Le Cam: Toward Unified Lower Bounds for Statistical Estimation and Interactive Decision Making<\/a>: Dylan J Foster<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bpqp-a-differentiable-convex-optimization-framework-for-efficient-end-to-end-learning\/\">BPQP: A Differentiable Convex Optimization Framework for Efficient End-to-End Learning<\/a>: Xiao Yang, Xu Yang, Weiqing Liu, Lewen Wang, Jiang Bian<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/compositional-generalization-across-distributional-shifts-with-sparse-tree-operations\/\">Compositional Generalization Across Distributional Shifts with Sparse Tree Operations<\/a>: Paul Smolensky, Jianfeng Gao, Roland Fernandez<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/dataset-and-lessons-learned-from-the-2024-satml-llm-capture-the-flag-competition\/\">Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition<\/a>: Reshmi Ghosh, Ahmed Salem, Giovanni Cherubin, Santiago Zanella-Beguelin, Sahar Abdelnabi<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/diffusion-for-world-modeling-visual-details-matter-in-atari\/\">Diffusion for World Modeling: Visual Details Matter in Atari<\/a>: Anssi Kanervisto, Tim Pearce, Cuiling Lan, Yan Lu<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/discoveryworld-a-virtual-environment-for-developing-and-evaluating-automated-scientific-discovery-agents\/\">DiscoveryWorld: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents<\/a>: Marc-Alexandre C\u00f4t\u00e9<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/efficient-adversarial-training-in-llms-with-continuous-attacks\/\">Efficient Adversarial Training in LLMs with Continuous Attacks<\/a>: Alessandro Sordoni<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/erbench-an-entity-relationship-based-automatically-verifiable-hallucination-benchmark-for-large-language-models\/\">ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for 
Large Language Models<\/a>: Ruochen Xu, Xing Xie<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/generalized-linear-bandits-with-limited-adaptivity\/\">Generalized Linear Bandits with Limited Adaptivity<\/a>: Nirjhar Das, Gaurav Sinha<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/human-aware-vision-and-language-navigation-bridging-simulation-to-reality-with-dynamic-human-interactions\/\">Human-Aware Vision-and-Language Navigation: Bridging Simulation to Reality with Dynamic Human Interactions<\/a>: Qi Dai<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/identifying-equivalent-training-dynamics\/\">Identifying Equivalent Training Dynamics<\/a>: Juan Bello-Rivas<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/implicit-curriculum-in-procgen-made-explicit\/\">Implicit Curriculum in Procgen Made Explicit<\/a>: Kaixin Wang<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/is-behavior-cloning-all-you-need-understanding-horizon-in-imitation-learning\/\">Is Behavior Cloning All You Need? 
Understanding Horizon in Imitation Learning<\/a>: Dylan J Foster, Adam Block<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/minference-1-0-accelerating-pre-filling-for-long-context-llms-via-dynamic-sparse-attention\/\">MInference: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention<\/a>: Huiqiang Jiang, Chengruidong Zhang, Qianhui Wu, Xufang Luo, Surin Ahn, Zhenhua Han, Amir Abdi, Chin-Yew Lin, Yuqing Yang, Lili Qiu<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/the-power-of-resets-in-online-reinforcement-learning\/\">The Power of Resets in Online Reinforcement Learning<\/a>: Dylan J Foster<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/videogui-a-benchmark-for-gui-automation-from-instructional-videos\/\">VideoGUI: A Benchmark for GUI Automation from Instructional Videos<\/a>: Linjie Li, Lijuan Wang<\/li>\n<!-- \/wp:list-item -->\n\n<!-- wp:list-item -->\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/voila-a-aligning-vision-language-models-with-users-gaze-attention\/\">Voila-A: Aligning Vision-Language Models with User's Gaze Attention<\/a>: Lei Ji, Nan Duan<\/li>\n<!-- \/wp:list-item --><\/ul>\n<!-- \/wp:list -->\n\n<!-- wp:separator -->\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n<!-- \/wp:separator -->\n\n<!-- wp:paragraph -->\n<p><strong>General Chair<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lmackey\/\">Lester Mackey<\/a><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong>Program Chair Assistant<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/t-brahmani\/\">Babak Rahmani<\/a><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong>Competition Chair<\/strong>: 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/taoqin\/\">Tao Qin<\/a><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong>Workshop Chair<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/adilsalim\/\">Adil Salim<\/a><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong>Communication Chair<\/strong>: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lualex\/\">Alex X Lu<\/a><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:spacer {\"height\":\"30px\"} -->\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<!-- \/wp:spacer -->\n\n<!-- wp:media-text {\"mediaId\":1024539,\"mediaLink\":\"https:\/\/www.microsoft.com\/en-us\/research\/event\/iclr-2024\/mrf-24_webimage_1400x788-2\/\",\"mediaType\":\"image\",\"mediaWidth\":40,\"backgroundColor\":\"\",\"hasBorder\":true} -->\n<div class=\"wp-block-media-text is-stacked-on-mobile is-style-border\" style=\"grid-template-columns:40% auto\"><figure class=\"wp-block-media-text__media\"><img src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/03\/MRF-24_WebImage_1400x788-1024x576.png\" alt=\"Microsoft Research Forum - abstract with shapes\" class=\"wp-image-1024539 size-full\"\/><\/figure><div class=\"wp-block-media-text__content\"><!-- wp:heading {\"level\":3,\"className\":\"h4\"} -->\n<h3 class=\"wp-block-heading h4\" id=\"microsoft-research-forum\">Microsoft Research Forum<\/h3>\n<!-- \/wp:heading -->\n\n<!-- wp:paragraph -->\n<p>Join us for a continuous exchange of ideas about science and technology research in the era of general AI. This series explores recent research advances, bold new ideas, and important discussions with the global research community. 
Register to attend upcoming episodes and watch previous episodes available on demand.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:buttons -->\n<div class=\"wp-block-buttons\"><!-- wp:button -->\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/researchforum.microsoft.com\/?OCID=msr_researchforum_Accelerator_NEURIPS24_conference_Webpage\" target=\"_blank\" rel=\"noreferrer noopener\">Register now<\/a><\/div>\n<!-- \/wp:button --><\/div>\n<!-- \/wp:buttons --><\/div><\/div>\n<!-- \/wp:media-text -->\n<!-- \/wp:msr\/content-tab -->\n\n<!-- wp:msr\/content-tab {\"title\":\"Booth schedule\"} -->\n<!-- wp:heading -->\n<h2 class=\"wp-block-heading\" id=\"microsoft-booth-schedule\">Microsoft booth schedule<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:paragraph -->\n<p>Stop by our&nbsp;booth (#445, toward the back of Hall A)&nbsp;to chat with our experts, see demos of our latest research, and find out about&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/event\/neurips-2024\/opportunities\/\" target=\"_blank\" rel=\"noreferrer noopener\">career opportunities<\/a>&nbsp;with Microsoft.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:table -->\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Tuesday, December 10<\/th><th>Session title<\/th><th>Speaker<\/th><th>Session type<\/th><\/tr><\/thead><tbody><tr><td>13:30 - 15:30<\/td><td>Recruitment Hour<\/td><td>Toni Brown-Bell, Cammy Vasquez, Shiloah Vasquez<br>University Recruiting, GTA<\/td><td>Recruitment<\/td><\/tr><tr><td>14:00 - 14:30<\/td><td>Recruitment Hour<\/td><td>Longqi Yang, E+D Office of Applied Research<br>John Langford<\/td><td>Meet &amp; Greet<\/td><\/tr><tr><td>14:30 - 15:00<\/td><td>MAIRA-2: Grounded Radiology Reporting<\/td><td>Shruthi Bannur, Kenza Bouzid<br>Microsoft Health Futures<\/td><td>Talk<\/td><\/tr><tr><td>15:00 - 15:30<\/td><td>RadFact: An LLM-based evaluation metric for AI-generated Radiology 
Reporting<\/td><td>Stephanie Hyland, Daniel Coelho de Castro<br>Microsoft Health Futures<\/td><td>Talk<\/td><\/tr><tr><td>15:30 - 16:00<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>16:00 - 16:30<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>16:30 - 17:00<\/td><td>Empowering Scientific Innovation with Generative AI<\/td><td>Yingce Xia, Tao Qin<br>AI for Science<\/td><td>Talk<\/td><\/tr><tr><td>16:30 - 17:00<\/td><td>Coffee with the AI for Science team<\/td><td>AI for Science<\/td><td>Coffee Chat<\/td><\/tr><tr><td>17:00 - 17:30<\/td><td>Meet and Greet with Peeyush Kumar, <em>Senior Researcher, MSR Redmond<\/em> on AI Interaction and Learning<\/td><td>Peeyush Kumar<br>Microsoft Research Redmond<\/td><td>Meet &amp; Greet<\/td><\/tr><\/tbody><\/table><\/figure>\n<!-- \/wp:table -->\n\n<!-- wp:table -->\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Wednesday, December 11<\/th><th>Session title<\/th><th>Speaker<\/th><th>Session type<\/th><\/tr><\/thead><tbody><tr><td>9:00 - 9:30<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>9:30 - 10:00<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>10:00 - 11:30<\/td><td>Microsoft Research Asia Promotion Session<\/td><td>Microsoft Research Asia<\/td><td>Talks and Meet &amp; Greets<\/td><\/tr><tr><td>12:00 - 13:00<\/td><td>WHAM: World and Human Action Model<\/td><td>Sam Devlin, Sergio Valcarcel Macua, Sarah Parisot, Linda Wen<br>Microsoft Research Redmond<\/td><td>Demo<\/td><\/tr><tr><td>13:00 - 13:30<\/td><td>MAIDAP Information Session<\/td><td>Zoey Huang<br>MAIDAP<\/td><td>Info session<\/td><\/tr><tr><td>13:30 - 14:00<\/td><td>Coffee and Applied Research Hiring<\/td><td>Longqi Yang &amp; OAR 
Members<br>E+D Office of Applied Research<\/td><td>Recruitment<\/td><\/tr><tr><td>14:00 - 15:00<\/td><td>Microsoft Research Asia Promotion Session<\/td><td>Microsoft Research Asia<\/td><td>Talk and Meet &amp; Greet<\/td><\/tr><tr><td>14:00 - 16:00<\/td><td>Recruitment Hour<\/td><td>Toni Brown-Bell, Cammy Vasquez, Shiloah Vasquez<br>University Recruiting, GTA<\/td><td>Recruitment<\/td><\/tr><tr><td>14:30 - 14:45<\/td><td>Evaluating Generative AI Systems is a Social Science Measurement Challenge<\/td><td>Hanna Wallach<br>Microsoft Research New York<\/td><td>Talk<\/td><\/tr><tr><td>14:45 - 15:30<\/td><td>Phi Silica and Multimodal Phi Silica with Quarot for Outlier-Free 4-bit inference<\/td><td>Pashmina Cameron, Maximilian Croci, Mohsen Fayyaz<br>Windows ASG, MSR\/MAI<\/td><td>Demo<\/td><\/tr><tr><td>15:30 - 16:00<\/td><td>Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale<\/td><td>Rogerio Bonatti<br>W&amp;D Applied Science Group<\/td><td>Talk<\/td><\/tr><tr><td>16:00 - 16:30<\/td><td>Segmentation-aware Multimodal LLM for radiology<\/td><td>Valentina Salvatelli<br>Microsoft Health Futures<\/td><td>Talk<\/td><\/tr><tr><td>16:30 - 16:45<\/td><td>Evaluating Generative AI Systems is a Social Science Measurement Challenge<\/td><td>Hanna Wallach<br>Microsoft Research New York<\/td><td>Talk<\/td><\/tr><tr><td>16:30 - 17:00<\/td><td>Coffee with the AI for Science team<\/td><td>AI For Science<\/td><td>Coffee Chat<\/td><\/tr><tr><td>16:45 - 17:00<\/td><td>Accurate retrosynthesis prediction with Chimera<\/td><td>Krzysztof Maziarz<br>AI For Science<\/td><td>Talk<\/td><\/tr><\/tbody><\/table><\/figure>\n<!-- \/wp:table -->\n\n<!-- wp:table -->\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Thursday, December 12<\/th><th>Session title<\/th><th>Speaker<\/th><th>Session type<\/th><\/tr><\/thead><tbody><tr><td>9:30 - 10:00<\/td><td>Phi Silica and Multimodal Phi Silica with Quarot for Outlier-Free 4-bit inference<\/td><td>Pashmina Cameron, 
Mohsen Fayyaz<br>Windows ASG, MSR\/MAI<\/td><td>Demo<\/td><\/tr><tr><td>10:00 - 10:30<\/td><td>Efficient\/Interpretable language and multimodal modeling<\/td><td>Chandan Singh, Lucas Liu, Jianwei Yang<br>Microsoft Research, Redmond<br>Deep Learning Group<\/td><td>Coffee Chat<\/td><\/tr><tr><td>10:30 - 12:30<\/td><td>Recruitment Hour<\/td><td>Toni Brown-Bell, Cammy Vasquez, Shiloah Vasquez<br>University Recruiting, GTA<\/td><td>Recruitment<\/td><\/tr><tr><td>10:30 - 11:00<\/td><td>Pyramid Vector Quantization for LLMs<\/td><td>Maximilian Croci, Tycho van der Ouderaa<br>MSR\/MAI<\/td><td>Talk<\/td><\/tr><tr><td>11:00 - 11:30<\/td><td>EUREKA: Evaluating and Understanding Large Foundation Models<\/td><td>Besmira Nushi, Vidhisha Balachandran, Neel Joshi<br>AI Frontiers<\/td><td>Talk<\/td><\/tr><tr><td>11:30 - 12:00<\/td><td>EUREKA: Evaluating and Understanding Large Foundation Models<\/td><td>Besmira Nushi, Vidhisha Balachandran, Neel Joshi<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>12:00 - 12:30<\/td><td>Generative AI for science discovery<\/td><td>Tao Qin and Yingce Xia<br>AI For Science<\/td><td>Talk<\/td><\/tr><tr><td>12:10 - 12:40<\/td><td>Coffee with the AI for Science Team<\/td><td>AI For Science<\/td><td>Coffee Chat<\/td><\/tr><tr><td>13:00 - 14:30<\/td><td>Two podcast sessions hosted by Eliza Strickland from IEEE Spectrum<\/td><td>Chris Bishop and Lidong Zhou<\/td><td>Podcast<\/td><\/tr><tr><td>14:30 - 15:00<\/td><td>OmniParser for Pure Vision Based GUI Agent<\/td><td>Yadong Lu<br>AI Frontiers<\/td><td>Virtual Demo<\/td><\/tr><tr><td>15:00 - 15:30<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><tr><td>15:30 - 16:00<\/td><td>AutoGen 0.4: Redefining Agentic AI Systems<\/td><td>AutoGen team<br>AI Frontiers<\/td><td>Demo<\/td><\/tr><\/tbody><\/table><\/figure>\n<!-- \/wp:table -->\n\n<!-- wp:spacer {\"height\":\"30px\"} -->\n<div style=\"height:30px\" aria-hidden=\"true\" 
class=\"wp-block-spacer\"><\/div>\n<!-- \/wp:spacer -->\n<!-- \/wp:msr\/content-tab -->\n\n<!-- wp:msr\/content-tab {\"title\":\"ML4H\"} -->\n<!-- wp:heading -->\n<h2 class=\"wp-block-heading\" id=\"microsoft-at-ml4h-2024\">Microsoft at ML4H 2024<\/h2>\n<!-- \/wp:heading -->\n\n<!-- wp:paragraph -->\n<p>Microsoft is a proud sponsor of the <a href=\"https:\/\/ahli.cc\/ml4h\/\" target=\"_blank\" rel=\"noreferrer noopener\">AHLI Machine Learning for Health (ML4H) Symposium<\/a>, which aims to bring together a vibrant community of machine learning researchers, clinicians, and healthcare data experts. ML4H is a separate symposium co-located with NeurIPS that takes place on December 15-16, 2024, at the Pinnacle Hotel Harbourfront in Vancouver, Canada. Stop by our table to meet our researchers and learn more about accepted papers and research at Microsoft.<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p>Microsoft's accepted publications:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong>MAIRA-Seg: Enhancing Radiology Report Generation with Segmentation-Aware Multimodal Large Language Models<\/strong><br><em><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/harssharma\/\">Harshita Sharma<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/vsalvatelli\/\">Valentina Salvatelli<\/a>; Shaury Srivastav; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kenzabouzid\/\">Kenza Bouzid<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/shbannur\/\">Shruthi Bannur<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dacoelh\/\">Daniel C. 
Castro<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/maxilse\/\">Maximilian Ilse<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbondtaylor\/\">Sam Bond-Taylor<\/a>; Mercy Prasanna Ranjit; Fabian Falck; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fperezgarcia\/\">Fernando P\u00e9rez-Garc\u00eda<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/antonsc\/\">Anton Schwaighofer<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hamurfet\/\">Hannah Richardson<\/a>; Maria Teodora Wetscherek; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sthyland\/\">Stephanie Hyland<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jaalvare\/\">Javier Alvarez-Valle<\/a><\/em><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/attribute-structuring-improves-llm-based-evaluation-of-clinical-text-summaries\/\" target=\"_blank\" rel=\"noreferrer noopener\">Attribute Structuring Improves LLM-Based Evaluation of Clinical Text Summaries<\/a><\/strong><br><em><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zelalemgero\/\">Zelalem Gero<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chansingh\/\">Chandan Singh<\/a>; Yiqing Xie; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/shezhan\/\">Sheng Zhang<\/a>; Praveen Subramanian; Paul Vozila; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tristan\/\">Tristan Naumann<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jfgao\/\">Jianfeng Gao<\/a>; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hoifung\/\">Hoifung Poon<\/a><\/em><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-study-on-context-length-and-efficient-transformers-for-biomedical-image-analysis\/\">A Study on Context Length and Efficient Transformers for Biomedical Image Analysis<\/a><\/strong><br><em>Sarah Hooper; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xuehui\/\">Hui Xue<\/a><\/em><\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:paragraph -->\n<p><strong><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/radphi-3-small-language-models-for-radiology\" target=\"_blank\" rel=\"noreferrer noopener\">RadPhi-3-Instruct: Small Language Models for Radiology<\/a><\/strong><br><em>Mercy Prasanna Ranjit; Shaury Srivastav; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/taganu\/\">Tanuja Ganu<\/a><\/em><\/p>\n<!-- \/wp:paragraph -->\n<!-- \/wp:msr\/content-tab -->\n<!-- \/wp:msr\/content-tabs -->","tab-content":[],"msr_startdate":"2024-12-10","msr_enddate":"2024-12-15","msr_event_time":"","msr_location":"Vancouver, BC, Canada","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"December 10, 2024","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/10\/NeurIPS-2024_WebBanner_1920x720-960x540.jpg\" class=\"img-object-cover\" alt=\"NeurIPS 2024 event header with abstract background pattern\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/10\/NeurIPS-2024_WebBanner_1920x720-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/10\/NeurIPS-2024_WebBanner_1920x720-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/10\/NeurIPS-2024_WebBanner_1920x720-655x368.jpg 655w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/10\/NeurIPS-2024_WebBanner_1920x720-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2024\/10\/NeurIPS-2024_WebBanner_1920x720-1280x720.jpg 1280w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","event_excerpt":"Microsoft is a proud sponsor of the 38th Conference on Neural Information Processing Systems (opens in new tab) (NeurIPS). This interdisciplinary conference brings together researchers in machine learning, neuroscience, statistics, optimization, computer vision, natural language processing, life sciences, natural sciences, social sciences, and other adjacent fields. We are pleased to share Microsoft has over 100 accepted papers at this year\u2019s conference. Stop by our booth (#445) toward the back of hall A to talk to&hellip;","msr_research_lab":[],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[583324],"related-projects":[],"related-opportunities":[1095579,1097910],"related-publications":[1028094,1041816,1041822,1047069,1053816,1054284,1056924,1068621,1074012,1080309,1088148,1090737,1090749,1090755,1090770,1090779,1090785,1090797,1090803,1090809,1090815,1090833,1090842,1090848,1090854,1090860,1090866,1090872,1090887,1090905,1091169,1091175,1091181,1091190,1091202,1091220,1091301,1091319,1091358,1091370,1091379,1091754,1091760,1091766,1091772,1091778,1091787,1091793,1091799,1091808,1091832,1091838,1091844,1091868,1091874,1091880,1091886,1091895,1091901,1091910,1091916,1091922,1091931,1091937,1091943,1091949,1091955,1091961,1091967,1091973,1091979,1091985,1092006,1092021,1092045,1092348,1092354,1092360,1092366,1092375,1092393,1092399,1092477,1092486,1092492,1092537,1092543,1092549,1092555,1092561,1092720,1092726,1092741,1092837,1092843,1092891,1092909,1092918,1092924,1092948,1092957,1092963,1092969,1092981,1092987,1093014,1093026,1093038,1093047,1093059,1093068,1093074,1093083,1093095,1093122,1099
677,1105104,1105596,1107723,1113519,1113531,1114071,1114080,1145881],"related-videos":[],"related-posts":[1107372,1107414,1107426,1107435,1112202,1112472,1112814],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/1088157","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":73,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/1088157\/revisions"}],"predecessor-version":[{"id":1146730,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/1088157\/revisions\/1146730"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1095411"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1088157"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1088157"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1088157"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=1088157"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=1088157"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1088157"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=1088157"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.
microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1088157"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1088157"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}