Section 01
DARE Framework: Infrastructure for Training and Evaluation of Diffusion Large Language Models
DARE is the first systematic training and evaluation platform for diffusion large language models (dLLMs), designed to address the distinctive challenges of dLLM training and optimization. It supports training methods including supervised fine-tuning (SFT), parameter-efficient fine-tuning (PEFT), and reinforcement learning (RL), and integrates inference acceleration with a comprehensive evaluation suite. The goal is to lower the barrier to dLLM research and application and to help the technology move from academic work to practical use.
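To make the PEFT path concrete, the sketch below attaches LoRA adapters to a masked-denoising transformer backbone and runs one masked-token training step. This is a generic illustration of the technique using the Hugging Face `transformers` and `peft` libraries, not DARE's actual API; the checkpoint name, mask ratio, and LoRA hyperparameters are placeholders.

```python
# Minimal LoRA (PEFT) sketch for a masked-denoising backbone.
# Generic illustration only -- DARE's own training interface is not shown here;
# the checkpoint name and hyperparameters below are placeholders.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "roberta-base"  # placeholder backbone; a dLLM checkpoint would go here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Wrap the attention projections with low-rank adapters so only a small
# fraction of parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query", "value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# One illustrative masked-denoising step: mask a random subset of tokens
# and compute cross-entropy only on the masked positions.
batch = tokenizer(["Diffusion LLMs denoise masked tokens."], return_tensors="pt")
input_ids = batch["input_ids"].clone()
labels = input_ids.clone()
mask = torch.rand(input_ids.shape) < 0.3           # placeholder mask ratio
mask &= batch["attention_mask"].bool()
input_ids[mask] = tokenizer.mask_token_id
labels[~mask] = -100                                # ignore unmasked positions in the loss
outputs = model(input_ids=input_ids,
                attention_mask=batch["attention_mask"],
                labels=labels)
outputs.loss.backward()
```

Because only the adapter weights receive gradients, this style of fine-tuning keeps memory and compute costs low, which is the practical appeal of the PEFT option in a framework like DARE.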