Section 01
[Introduction] LLMEnergyMeasure: An Industrial-Grade Benchmark Framework for Energy Efficiency Evaluation of Large Language Model Inference
LLMEnergyMeasure is a research framework for benchmarking the inference efficiency of large language models (LLMs). It provides MLPerf-style benchmarks that evaluate LLM inference along three dimensions: energy consumption, throughput, and computational complexity. The framework aims to fill a gap left by existing tools, which largely ignore energy consumption; to support enterprises in hardware selection, verification of optimization strategies, and carbon-footprint accounting; and to promote the sustainable development of the AI industry.
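At its core, the energy dimension of such a benchmark reduces to integrating sampled device power over the inference window and normalizing by the work done (e.g., tokens generated). The sketch below illustrates this idea only; the function names and sampling scheme are assumptions for illustration, not LLMEnergyMeasure's actual API:

```python
def energy_joules(power_samples_w, interval_s):
    # Trapezoidal integration of power (watts) sampled at a fixed interval (seconds).
    # Returns total energy in joules over the sampled window.
    if len(power_samples_w) < 2:
        return 0.0
    return sum(
        (a + b) / 2.0 * interval_s
        for a, b in zip(power_samples_w, power_samples_w[1:])
    )

def joules_per_token(power_samples_w, interval_s, tokens_generated):
    # Energy-per-token: a common efficiency metric for an inference run.
    return energy_joules(power_samples_w, interval_s) / tokens_generated

# Example: a steady 250 W draw sampled every 0.1 s for 2 s, producing 50 tokens.
samples = [250.0] * 21          # 21 samples span 20 intervals = 2 s
total = energy_joules(samples, 0.1)         # ~500 J
per_token = joules_per_token(samples, 0.1, 50)  # ~10 J/token
```

In practice the power samples would come from a hardware counter such as NVIDIA's NVML power readings polled during generation; the integration and normalization step stays the same regardless of the sampling source.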