Research Focus | October 7, 2024
Microsoft Research Blog

Research Focus: Week of October 7, 2024 

October 9, 2024

Simplifying secure decision tree training; Improving accuracy of audio content detection; A novel neurosymbolic system for converting text to tables; New video series: AI for Business Transformation; TEE security protections for container workloads.

Research Updates
Articles

ECCV Updates | Six Selected Frontier Papers in Computer Vision 

October 9, 2024

Editor's note: Welcome to the "Research Updates" column! "Research Updates" gathers the latest innovations and research news from Microsoft Research Asia. Here, you can quickly browse highlights from the lab, keep a sharp eye on frontier fields, and find advanced, practical open-source tools. ECCV 2024 (European Conference on Computer Vision) concluded on October 4 in Milan, Italy. At this major international conference in computer vision, Microsoft Research Asia had multiple papers accepted. In this issue...

Articles

OmniParser for pure vision-based GUI agent 

October 8, 2024

By Yadong Lu, Senior Researcher; Jianwei Yang, Principal Researcher; Yelong Shen, Principal Research Manager; Ahmed Awadallah, Partner Research Manager Recent advancements in large vision-language models (VLMs), such as GPT-4V and GPT-4o, have demonstrated considerable promise in driving intelligent agent systems…

Awards | American Physical Society

Frank Noé, American Physical Society Fellow 

October 4, 2024

The AI for Science partner research manager received this distinction for the development of machine learning methods for advancing the physical sciences, in particular for the many-body sampling problem and the electronic structure problem.

In the news | Fred Hutch Press Release

Cancer centers launch Cancer AI Alliance to unlock discoveries, transform care using cancer data and applied AI 

October 2, 2024

Dana-Farber Cancer Institute, Fred Hutch Cancer Center, Memorial Sloan Kettering Cancer Center, and The Sidney Kimmel Comprehensive Cancer Center and Whiting School of Engineering at Johns Hopkins have joined forces and secured funding from AI technology leaders AWS, Deloitte,…

Microsoft Research Blog

Data Formulator: Exploring how AI can help analysts create rich data visualizations  

October 1, 2024 | Chenglong Wang, Steven Drucker, Dan Marshall, Jeevana Priya Inala, Kori Inkpen, and Jianfeng Gao

Data Formulator investigates combining UI interactions with natural language input. Powered by AI, it can help users create or adapt visualizations and supports continuous refinement through an iterative process. Now available on GitHub.

In the news | Planet

Project Centinela 

October 1, 2024

Planet’s biodiversity site monitoring program delivers an unprecedented array of high-resolution, high-frequency satellite imagery, analytics, and Planetary Variables into the hands of those who are maintaining a lifeline for biodiversity and the communities that depend on that variety of life.

Figure: The RadEdit workflow. An original chest X-ray image, an editing prompt ("Consolidation"), an edit mask matching the prompt, and a do-not-edit mask are passed to RadEdit, which produces the edited image.
Microsoft Research Blog

Stress-testing biomedical vision models with RadEdit: A synthetic data approach for robust model deployment 

September 30, 2024 | Max Ilse, Daniel Coelho de Castro, and Javier Alvarez-Valle

RadEdit stress-tests biomedical vision models by simulating dataset shifts through precise image editing. It uses diffusion models to create realistic, synthetic datasets, helping to identify model weaknesses and evaluate robustness.

Microsoft Research Podcast

Abstracts: September 30, 2024 

September 30, 2024 | Amber Tingle, Daniela Massiceti, and Martin Grayson

The personalizable object recognizer Find My Things was recently recognized for accessible design. Researcher Daniela Massiceti and software development engineer Martin Grayson talk about the research project’s origins and the tech advances making it possible.
