DFLOP: A Data-driven Framework for Multimodal LLM Training Pipeline Optimization

  • H. An,
  • Sihyun Kim,
  • Chaerim Lim,
  • Hyunjoong Kim,
  • Sangmin Jung,
  • Hye Yoon Lee,
  • Dongwook Kim,
  • Takki Yu,
  • Jinkyu Jeong,
  • Youngsok Kim,
  • Kwanghyun Park

arXiv

Multimodal Large Language Models (MLLMs) have achieved remarkable advances by integrating text, image, and audio understanding within a unified architecture. However, existing distributed training frameworks remain fundamentally data-blind: they parallelize computation without accounting for variations in input data characteristics. This data unawareness leads to severe computation skew across stages and microbatches, where heterogeneous multimodal inputs incur different processing costs. Consequently, GPU resources are unevenly utilized, synchronization delays accumulate, and overall training efficiency degrades. To address this limitation, we present DFLOP, a data-driven framework for multimodal LLM training pipeline optimization. DFLOP continuously profiles runtime behavior to capture data-induced computation variance and employs predictive scheduling to balance workloads across stages and microbatches. By coupling data characteristics with execution planning, DFLOP substantially improves GPU utilization and throughput. Extensive experiments on large-scale multimodal benchmarks show that DFLOP achieves up to 3.6x faster training compared to state-of-the-art distributed training frameworks.
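The workload-balancing idea in the abstract (assign heterogeneous samples to microbatches using their predicted per-sample cost) can be illustrated with a minimal sketch. This is not DFLOP's actual scheduler; the function name, the cost values, and the longest-processing-time greedy heuristic are illustrative assumptions.

```python
import heapq

def balance_microbatches(sample_costs, num_microbatches):
    """Illustrative sketch: assign samples to microbatches so that the
    predicted per-microbatch cost is as even as possible, via the
    longest-processing-time (LPT) greedy heuristic: sort samples by
    descending predicted cost, then always place the next sample into
    the currently lightest microbatch."""
    # Min-heap of (accumulated_cost, microbatch_index).
    heap = [(0.0, i) for i in range(num_microbatches)]
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_microbatches)]
    for sample, cost in sorted(enumerate(sample_costs), key=lambda x: -x[1]):
        total, idx = heapq.heappop(heap)
        assignment[idx].append(sample)
        heapq.heappush(heap, (total + cost, idx))
    return assignment

# Hypothetical predicted costs for heterogeneous multimodal samples
# (e.g., image-heavy samples cost more than text-only ones).
costs = [9.0, 1.0, 8.0, 2.0, 7.0, 3.0]
batches = balance_microbatches(costs, 2)
# Both microbatches end up with a predicted cost of 15.0 each,
# whereas a naive contiguous split [9,1,8] vs [2,7,3] gives 18.0 vs 12.0.
```

A real scheduler would obtain the costs from runtime profiling and a learned cost predictor rather than fixed constants, but the balancing objective is the same.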