[Introduction] Vetch: An Innovative Tool for Monitoring Energy Consumption and Cost of LLM Inference
Vetch is an energy-consumption and cost observability tool launched by Prismatic Labs, built specifically for large language model (LLM) inference. It addresses a common blind spot: the energy use and financial cost of the inference phase are often overlooked. By tracking the energy and monetary cost of LLM calls in real time, Vetch helps developers and enterprises make model-selection decisions, control spending, and pursue green-AI practices, filling a notable gap in the AI infrastructure landscape.
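To make the per-call accounting concrete, the sketch below shows one plausible way such a tool could derive energy and electricity cost from a call's average GPU power draw and duration. This is purely an illustrative assumption: the `InferenceRecord` class, its fields, and all numbers are hypothetical and are not Vetch's actual API or methodology.

```python
# Hypothetical sketch of per-call energy/cost accounting; not Vetch's real API.
from dataclasses import dataclass

@dataclass
class InferenceRecord:
    model: str
    gpu_power_watts: float   # assumed average GPU power draw during the call
    latency_seconds: float   # wall-clock duration of the call
    output_tokens: int

    @property
    def energy_wh(self) -> float:
        """Energy used in watt-hours: power (W) x time (s) / 3600."""
        return self.gpu_power_watts * self.latency_seconds / 3600.0

    def cost_usd(self, price_per_kwh: float) -> float:
        """Electricity cost in USD for this single call."""
        return self.energy_wh / 1000.0 * price_per_kwh

rec = InferenceRecord(model="example-7b", gpu_power_watts=300.0,
                      latency_seconds=12.0, output_tokens=512)
print(f"{rec.energy_wh:.3f} Wh, ${rec.cost_usd(0.15):.6f}")
```

A real observability tool would additionally aggregate such records across calls, models, and time windows, but the core power-times-time arithmetic above is the foundation of any energy estimate.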