In the news | Open Compute Project Foundation
Earlier this year, AMD, Arm, Intel, Meta, Microsoft, NVIDIA, and Qualcomm Technologies, Inc. formed the Microscaling Formats (MX) Alliance with the goal of creating and standardizing next-generation 6- and 4-bit data types for AI training and inferencing. The key enabling…
| Boxin Wang, Bo Li, and Zinan Lin
This paper received the outstanding benchmarks track paper award during NeurIPS 2023. How trustworthy are generative pre-trained transformer (GPT) models? To answer this question, University of Illinois Urbana-Champaign, together with Stanford University, University of California, Berkeley,…
"Work hard, play harder!" This is a core part of Microsoft Research Asia's culture, and the best description of how interns work and live here. Interns not only explore the frontiers of their fields under the guidance of researchers, but also meet like-minded peers from different universities and disciplines. There are holiday parties full of laughter, DIY workshops that challenge interns to stretch themselves, technical "recharging stations" that build knowledge, and book-sharing sessions that broaden horizons… Youth burns bright, and we write these memories together with you! July 27: Summer Party — adding a little something to the sweltering summer...
| Jack Williams, Ian Drosos, and Kasra Ferdowsi
These research papers were presented at the IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC 2023), a premier forum for design, theory, and application of computing technologies for programming, modelling, and communication. Large language models…
Author: 吕岩 (Lü Yan). Since the term "artificial intelligence" was coined at the 1956 Dartmouth Conference, it took humanity nearly 70 years to accumulate the technology and resources needed to ignite the current AI explosion. Now that we have crossed that tipping point, large language models (LLMs) have delivered a dizzying series of advances in natural language understanding, speech recognition, image generation, and beyond. With the emergence of applications such as ChatGPT and DALL-E, we are seeing AI begin to exhibit more complex capabilities, such as observing, learning from, and understanding the real world, and going further to reason and create. ...
Research Focus: Principal researcher Lester Mackey recognized for pioneering statistical and ML techniques; Pareto frontiers in neural feature learning; structural inequality in the influencer industry; new research on cardinality estimation.
Editor's note: Welcome to the "Research Updates" column! "Research Updates" gathers the latest innovations and research news from Microsoft Research Asia. Here you can quickly browse the lab's highlights, stay attuned to frontier fields, and discover advanced, practical open-source tools. In this issue: 01. AniPortraitGAN: animatable, photorealistic 3D portrait generation; 02. KOSMOS-2.5: a multimodal large language model for reading text-intensive images; 03. PromptTTS 2: speech synthesis driven by text descriptions...
| Gretchen Huizinga and Sheng Zhang
Researcher Dr. Sheng Zhang joins “Abstracts”—your source for cutting-edge research in brief—to discuss a recent paper on distilling large language models into smaller, more efficient ones capable of excelling in broad application classes.
| Li Lyna Zhang, Jiahang Xu, Quanlu Zhang, Yuqing Yang, Ting Cao, and Mao Yang
A persistent challenge in deep learning is optimizing neural network models for diverse hardware configurations while balancing accuracy against low latency. Learn how SpaceEvo automates hardware-aware neural architecture search to tailor DNN models for swift execution on diverse devices.