Section 01
OpenArc: A Local AI Inference Engine Exclusively for Intel Devices, with One-Stop Multimodal Support
OpenArc is an open-source inference engine built on OpenVINO and designed exclusively for Intel devices. It supports local, private deployment of multiple model types, including LLMs, vision-language models (VLMs), speech models, embedding models, and rerankers, and exposes OpenAI-compatible API endpoints. It aims to fill the gap in AI toolchains for Intel hardware users, keeping data on the local machine while balancing performance and privacy.
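Because the server exposes OpenAI-compatible endpoints, any standard OpenAI-style client can talk to it. The sketch below builds a chat-completions request against a local instance using only the Python standard library; the host, port, and model name are assumptions for illustration, not values documented by OpenArc.

```python
import json
from urllib.request import Request

# Hypothetical local OpenArc address; adjust host/port to your deployment.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str) -> Request:
    """Build an OpenAI-style chat-completions request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "qwen2.5-7b-int4" is a placeholder model ID, not a name shipped by OpenArc.
req = build_chat_request("qwen2.5-7b-int4", "Hello from an Intel device!")
print(req.full_url)
# Send with urllib.request.urlopen(req) once the server is running.
```

Since only the base URL differs from the hosted OpenAI API, existing OpenAI SDK clients can typically be pointed at the local server by overriding their base URL, which keeps all inference traffic on the machine.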