Articles

How can image search be made efficient and accurate? A look at lightweight vision pre-trained models

July 28, 2022

Editor's note: Have you ever struggled with image retrieval, whether failing to find the image you need in a massive collection or getting unsatisfactory results from text-based search? To tackle this problem, researchers from Microsoft Research Asia and Microsoft's Cloud + AI division conducted an in-depth study of lightweight vision models and proposed a series of design and compression methods for vision pre-trained models, meeting the requirements for lightweight deployment of vision Transformers. The method and models have now been successfully applied in the Microsoft Bing search engine, enabling accurate and fast inference and retrieval over ten billion images...

Articles

CVPR 2022 highlights: Frontier research trends in computer vision 

July 28, 2022

The Conference on Computer Vision and Pattern Recognition (CVPR) is regarded as the premier conference in computer vision and pattern recognition. CVPR 2022, held on June 19-23, received a record number of 8,161 full submissions that went into the review…

Articles

LayoutLMv3: a multimodal pre-trained model for Document AI combining generality and superior performance

July 26, 2022

Editor's note: In enterprise digital transformation, structured analysis and content extraction from multimodal carriers such as documents and images is a key link; processing information including contracts, invoices, and reports quickly, automatically, and accurately is essential for improving the productivity of modern enterprises. Document AI technology emerged to meet this need. Over the past few years, Microsoft Research Asia has released the LayoutLM series of pre-trained models for general document understanding, continuously improving how the models pre-train on the text, layout, and visual information in documents. The recently released LayoutLM 3.0, the latest version in the series...

In the news | Nature

Make greenhouse-gas accounting reliable — build interoperable systems 

July 26, 2022

Global integrated reporting is essential if the planet is to achieve net-zero emissions. In March, the United Nations took its first meaningful step to hold investors, businesses, cities and regions accountable for reducing greenhouse-gas emissions, when UN secretary-general António Guterres…

Articles

NUWA-Infinity: an infinite visual generation model that lets visual art creation extend freely

July 22, 2022

Editor's note: Microsoft Research Asia previously proposed the multimodal model NUWA, which can generate images or videos from given text, visual, or multimodal inputs and supports a range of visual art creation tasks, including text-to-image and text-to-video generation, image completion, and video prediction. Recently, Microsoft Research Asia publicly released a new result: NUWA-Infinity, an upgraded version of NUWA for infinite visual generation that pushes visual art creation toward an "infinite stream," generating high-resolution images of arbitrary size or long-duration videos. Let's experience what AI...

Articles

Supply Chain News: Impact and Categories 

July 22, 2022

Authors: Allie Giddings and Chhaya Methani. In our last blog post, we explained how news is surfaced in Supply Chain Insights and how it can be useful for having better risk visibility. Since then, we’ve made two major updates to…

Articles

ICML 2022 | Selected papers from the frontiers of machine learning!

July 21, 2022

Editor's note: ICML is regarded as one of the top international conferences in artificial intelligence and machine learning, and it enjoys great prestige in the computer science community. ICML 2022 was held July 17-23 in a hybrid online and in-person format. Today we have selected seven papers published by Microsoft Research Asia at this conference and briefly introduce them, taking you through the latest results in machine learning under keywords such as reinforcement learning, graph neural networks, and knowledge graph representation learning! Paper link: https://arxiv.org/abs/2202.07995...

[Figure: three bar plots summarizing DeepSpeed Compression results. XTC-BERT is 32x smaller than BERT (accuracy 83.44 vs. 83.95); ZeroQuant INT8 inference is 2.6x faster than the FP16 PyTorch baseline and cuts inference GPUs from 2 to 1, a 5.2x overall efficiency gain (accuracy 50.4 vs. 50.5); and ZeroQuant compresses a model more than 5,000x more cheaply than the baseline (accuracy 42.26 vs. 42.35).]
Microsoft Research Blog

DeepSpeed Compression: A composable library for extreme compression and zero-cost quantization 

July 20, 2022 | DeepSpeed Team and Andrey Proskurin

Large-scale models are revolutionizing deep learning and AI research, driving major improvements in language understanding, creative text generation, multilingual translation, and more. But despite their remarkable capabilities, the models’ large size creates latency and cost constraints that hinder the…
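The excerpt and figure above center on INT8 quantization as a way to cut inference latency, GPU count, and cost. As a minimal illustrative sketch of the general idea only, and not the DeepSpeed Compression or ZeroQuant API, the snippet below applies PyTorch's built-in post-training dynamic quantization to a toy feed-forward block; the layer sizes are hypothetical stand-ins for a much larger model.

# Minimal sketch: post-training dynamic quantization with PyTorch (illustrative only,
# not DeepSpeed Compression / ZeroQuant). Layer sizes are hypothetical.
import torch
import torch.nn as nn

# Stand-in for a much larger Transformer feed-forward block.
model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.GELU(),
    nn.Linear(3072, 768),
)
model.eval()

# Convert Linear weights to INT8; activations are quantized dynamically at runtime.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 768)
with torch.no_grad():
    print(quantized(x).shape)  # same interface, smaller weights, faster int8 matmuls on CPU

The point of the sketch is that the quantized model keeps the original call signature while storing its linear weights in INT8, which is the kind of size and latency reduction the post describes at much larger scale.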

In the news | IT Pro

Using AI and machine learning to kickstart climate change fightback 

July 19, 2022

Fighting climate change with carbon capture or geoengineering means harnessing the power of AI and sophisticated data modelling. Advances in artificial intelligence (AI) and machine learning may increase the chances of reducing carbon emissions through carbon capture or geoengineering…
