News & Feature Articles
In this issue, we examine a new conversation segmentation method that delivers more coherent and personalized agent conversations, and we review efforts to improve MLLMs’ understanding of geologic maps. Check out the latest research and other updates.
| Dongsheng Li, Dongqi Han, and Yansen Wang
Researchers and their collaborators are drawing inspiration from the brain to develop more sustainable AI models. Projects like CircuitNet and CPG-PE improve performance and energy efficiency by mimicking the brain's neural patterns.
Learn what’s next for AI at Research Forum on Sept. 3; WizardArena simulates human-annotated chatbot games; MInference speeds pre-filling for long-context LLMs via dynamic sparse attention; Reef: Fast succinct non-interactive zero-knowledge regex proofs.
| Xinyang Jiang, Yubin Wang, Dongsheng Li, and Cairong Zhao
Using LLMs to create structured graphs of image descriptors can enhance the images generated by visual language models. Learn how structured knowledge can improve prompt tuning for both visual and language comprehension.
| Shun Zheng, Jiang Bian, Tie-Yan Liu, Li Zhao, Tao Qin, Yue Wang, Dongsheng Li, Yuqing Yang, and Xufang Luo
ICLR (International Conference on Learning Representations) is recognized as one of the top conferences in the field of deep learning. Many influential papers on artificial intelligence, statistics, and data science—as well as important application fields such…