In the news | The National
Plans to create a centre, founded and funded by Microsoft and G42, that will "identify, develop and advance best practices and industry standards for the responsible use of AI in the Middle East and the Global South" were unveiled. Microsoft,…
In the news | Live Science
From disaster recovery to conservation and healthcare, plenty of AI projects will greatly benefit humanity, Microsoft experts Juan M. Lavista Ferres and William B. Weeks say. Over the last few years, artificial intelligence (AI) has been firmly in the world's spotlight, and…
After finishing a day of research, Guodong Liu, a PhD student at the Institute of Computing Technology, Chinese Academy of Sciences, walked out of the Microsoft building under a starry sky, the warm melodies of the band Sodagreen (苏打绿) playing in his earphones. Through the "星跃计划" program, Liu spent a year as an intern at Microsoft Research Asia, where, under the guidance of his two mentors, Youshan Miao, a senior researcher at Microsoft Research Asia, and Saeed Maleki, a senior R&D engineer at Microsoft Research Redmond, he explored ways to accelerate the training of deep learning models. Just as it was designed to do, the "星跃计划" program has connected outstanding talent with research teams at Microsoft's two major research labs…
Author: Machine Learning Group  Editor's note: Research and development (R&D) is a core driver of social progress, economic growth, and technological innovation. In the era of AI, fully unlocking the potential of large language models, using automation to raise R&D efficiency, and enabling cross-domain knowledge transfer and innovation have become key to the intelligent transformation of R&D. To meet this challenge, Microsoft Research Asia has introduced RD-Agent, an automated research and development tool that builds on the capabilities of large language models to pioneer AI-driven automation of the R&D process…
Investigating vulnerabilities in LLMs; A novel total-duration-aware (TDA) duration model for text-to-speech (TTS); Generative expert metric system through iterative prompt priming; Integrity protection in 5G fronthaul networks.
Author: Shujie Liu  Editor's note: Text-to-speech (TTS) is a technology that converts written text into natural speech, and it plays an important role in improving accessibility and enabling cross-language communication. Microsoft Research Asia previously introduced VALL-E, the first large speech model built on discrete codes, and has since upgraded it to VALL-E 2 through repetition-aware sampling and grouped code modeling. The new version pushes the boundaries of robustness, naturalness, and speaker similarity, taking zero-shot TTS performance on Li…
| Robert Osazuwa Ness
MedFuzz tests LLMs by breaking the assumptions built into medical benchmarks, exposing vulnerabilities in order to bolster real-world accuracy.
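To make that idea concrete, here is a minimal sketch of this kind of adversarial probing, assuming a hypothetical `query_llm` helper in place of a real chat client; it illustrates the approach described above rather than the actual MedFuzz implementation. An attacker LLM rewrites a benchmark question in a way that should leave the correct answer unchanged, and we check whether the target model's answer flips.

```python
# Sketch only: an attacker LLM perturbs a benchmark question without changing
# its correct answer, then we test whether the target model is swayed.
# `query_llm` is a hypothetical stand-in for whatever chat-completion client you use.

def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM endpoint; replace with a real client."""
    raise NotImplementedError

def attack_question(question: str) -> str:
    """Ask an attacker LLM to add plausible but answer-irrelevant details."""
    prompt = (
        "Rewrite the following multiple-choice medical exam question by adding "
        "realistic patient details that a careless reader might find distracting, "
        "WITHOUT changing which option is medically correct.\n\n" + question
    )
    return query_llm(prompt)

def answer_question(question: str) -> str:
    """Ask the target LLM to answer with a single option letter."""
    prompt = question + "\n\nAnswer with a single option letter (A, B, C, or D)."
    return query_llm(prompt).strip()[:1].upper()

def adversarial_probe(question: str, correct: str, rounds: int = 3) -> bool:
    """Return True if any perturbed variant makes the model abandon the correct answer."""
    for _ in range(rounds):
        perturbed = attack_question(question)
        if answer_question(perturbed) != correct:
            return True  # a benchmark assumption was broken and the model was swayed
    return False
```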
Author: Shujie Liu In recent years, the rapid advancement of AI has continually expanded the capabilities of Text-to-Speech (TTS) technology. Ongoing optimizations and innovations in TTS have enriched and simplified voice interaction experiences. These research developments hold significant potential across…
| Alonso Guevara Fernández, Katy Smith, Joshua Bradley, Darren Edge, Ha Trinh, Sarah Smith, Ben Cutler, Steven Truitt, and Jonathan Larson
GraphRAG uses LLM-generated knowledge graphs to substantially improve complex question answering compared with baseline retrieval-augmented generation (RAG). Discover how automatic tuning adapts GraphRAG to new datasets, making its results more accurate and relevant.
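For readers new to the approach, the following is a rough conceptual sketch of the GraphRAG idea, not the API of the open-source graphrag package: an LLM extracts an entity-relationship graph from text chunks, the graph is partitioned into communities, each community is summarized, and a question is answered map-reduce style over those summaries. The `llm` helper and the prompts are placeholders.

```python
# Conceptual sketch of a GraphRAG-style pipeline; not the graphrag library's API.
# `llm` is a hypothetical helper and is assumed to return well-formed JSON where asked.

import json
import networkx as nx

def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError

def extract_graph(chunks: list[str]) -> nx.Graph:
    """Build a knowledge graph from LLM-extracted [source, relation, target] triples."""
    graph = nx.Graph()
    for chunk in chunks:
        triples = json.loads(llm(
            "Extract entity relationships from the text below as a JSON list of "
            "[source, relation, target] triples.\n\n" + chunk
        ))
        for source, relation, target in triples:
            graph.add_edge(source, target, relation=relation)
    return graph

def summarize_communities(graph: nx.Graph) -> list[str]:
    """Detect graph communities and ask the LLM to summarize each one."""
    summaries = []
    for community in nx.community.louvain_communities(graph):
        edges = [f"{u} -[{graph[u][v]['relation']}]- {v}"
                 for u, v in graph.subgraph(community).edges()]
        summaries.append(llm("Summarize the theme of these relationships:\n" + "\n".join(edges)))
    return summaries

def global_answer(question: str, summaries: list[str]) -> str:
    """Map-reduce style: answer over each community summary, then combine the partial answers."""
    partial = [llm(f"Using only this context, answer '{question}':\n{s}") for s in summaries]
    return llm(f"Combine these partial answers into one response to '{question}':\n" + "\n\n".join(partial))
```

In this sketch the community summaries play the role that retrieved passages play in baseline RAG, which is why questions that span many documents can still be answered coherently.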