Research Focus: Week of October 23, 2023
In this issue: Kosmos-2.5: A Multimodal Literate Model; Can vine copulas explain complex relationships of weather variables?; New system accelerates the adaptive training process; Structural inequalities and relational labor in the influencer industry.
LLMLingua Series
Effectively Deliver Information to LLMs via Prompt Compression: LLMLingua, LongLLMLingua, and LLMLingua-2. Large language models (LLMs) have demonstrated remarkable…
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
Abstracts: October 9, 2023
Researcher Dr. Sheng Zhang joins “Abstracts”—your source for cutting-edge research in brief—to discuss a recent paper on distilling large language models into smaller, more efficient ones capable of excelling in broad application classes.
MEGA: Multilingual Evaluation of Generative AI
On Surgical Fine-tuning for Language Encoders
LLaVA: Large Language and Vision Assistant
LLaVA is an open-source project that collaborates with the research community to advance the state of the art in AI. LLaVA represents the first end-to-end trained large multimodal model (LMM) that achieves impressive chat capabilities, mimicking the spirit of the multimodal…