Zing Forum


Implementation of Open-ended Commands in Autonomous Driving: An LLM-driven Multi-planner Scheduling Framework

This paper proposes an instruction implementation framework based on large language models (LLMs), which converts passengers' natural language commands into executable vehicle control signals by scheduling multiple MPC motion planners, achieving effective decoupling between semantic reasoning and vehicle control.

Autonomous Driving, Large Language Models, Human-Machine Interaction, Motion Planning, MPC, Natural Language Understanding, Multi-planner Scheduling, Intelligent Transportation
Published 2026-04-09 17:32 · Recent activity 2026-04-10 09:46 · Estimated read 6 min

Section 01

[Introduction] LLM-driven Multi-planner Scheduling Framework: A New Solution for Open-ended Command Implementation in Autonomous Driving

The proposed framework converts passengers' natural language commands into executable vehicle control signals by scheduling multiple MPC motion planners, decoupling semantic reasoning from vehicle control. It addresses the limitations of traditional autonomous driving systems in handling open-ended commands, improving task completion rate, safety, and decision interpretability.


Section 02

Background: Challenges in Autonomous Driving Human-Machine Interaction and Limitations of Traditional Methods

New Challenges in Human-Machine Interaction

As autonomous driving technology develops, existing HMI research mostly focuses on driver interaction, overlooking passengers' open-ended control needs (such as commands like "drive slower to enjoy the scenery"). Converting such commands into control signals accurately and interpretably remains a key challenge.

Limitations of Traditional Methods

Traditional layered architectures rely on predefined command mappings and lack flexibility when processing open-ended language. Moreover, the tightly coupled design that maps high-level semantics directly to low-level control makes complex commands hard to handle and turns decision-making into a black box, hindering safety verification.


Section 03

Core Method: Three-layer Architecture Design with Centralized Scheduling

The framework adopts a centralized scheduling design to achieve decoupling between semantic reasoning and control:

  1. LLM Semantic Parsing Layer: Deeply understands the intent, constraints, and priorities of commands (e.g., parsing the trade-off between the dual goals of "arrive as soon as possible but without jolting");
  2. Scheduling Script Generation Layer: Generates scheduling instructions based on parsing results, dynamically selecting and combining multiple MPC planners (each responsible for specific optimization goals such as shortest path, smooth driving);
  3. Trajectory-to-Control Layer: Reuses mature control strategies to convert planned trajectories into control signals such as accelerator and brake. The architecture builds a transparent and traceable decision chain, facilitating safety audits and fault detection.
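The three-layer flow above can be sketched in Python. Everything here is illustrative, not the paper's actual API: `ParsedIntent`, `parse_command`, the planner registry, and the goal weights are all assumed names, and the LLM parsing layer is replaced by a hard-coded stand-in.

```python
from dataclasses import dataclass

@dataclass
class ParsedIntent:
    """Hypothetical output of the LLM semantic parsing layer."""
    goals: dict        # goal name -> priority weight, e.g. {"arrival_time": 0.6}
    constraints: dict  # hard constraints, e.g. {"max_jerk": 1.0}

def parse_command(command: str) -> ParsedIntent:
    # Stand-in for the LLM layer: a real system would prompt an LLM and
    # parse its structured reply. Here one example case is hard-coded.
    if "as soon as possible" in command and "without jolting" in command:
        return ParsedIntent(goals={"arrival_time": 0.6, "comfort": 0.4},
                            constraints={"max_jerk": 1.0})
    return ParsedIntent(goals={"comfort": 1.0}, constraints={})

# Registry of single-objective MPC planners (names are assumptions).
PLANNERS = {"arrival_time": "min_time_mpc", "comfort": "smooth_mpc"}

def generate_schedule(intent: ParsedIntent) -> list:
    """Scheduling layer: select and weight planners for the parsed goals."""
    return sorted(
        [(PLANNERS[g], w) for g, w in intent.goals.items() if g in PLANNERS],
        key=lambda pw: -pw[1],
    )

schedule = generate_schedule(
    parse_command("arrive as soon as possible but without jolting"))
```

The trajectory-to-control layer would then feed the selected planners' output through an existing tracking controller; that reuse is what keeps the decision chain traceable.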

Section 04

Experimental Evidence: Closed-loop Evaluation and Performance

Closed-loop Evaluation Benchmark

A closed-loop evaluation benchmark simulating real-world scenarios is built, supporting multi-dimensional evaluation of command understanding accuracy, task completion rate, and safety compliance, filling a gap left by existing tools.
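Aggregating such multi-dimensional metrics from closed-loop episodes could look like the following sketch. The episode record schema and metric names are assumptions for illustration, not the benchmark's actual format, and the sample data is invented.

```python
# Each record summarizes one closed-loop episode (schema is hypothetical).
episodes = [
    {"understood": True,  "completed": True,  "safety_violations": 0},
    {"understood": True,  "completed": False, "safety_violations": 0},
    {"understood": False, "completed": False, "safety_violations": 1},
    {"understood": True,  "completed": True,  "safety_violations": 0},
]

def evaluate(episodes: list) -> dict:
    """Aggregate per-episode outcomes into the three benchmark dimensions."""
    n = len(episodes)
    return {
        "understanding_accuracy": sum(e["understood"] for e in episodes) / n,
        "task_completion_rate": sum(e["completed"] for e in episodes) / n,
        "safety_compliance": sum(e["safety_violations"] == 0
                                 for e in episodes) / n,
    }

report = evaluate(episodes)
```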

Experimental Results

  • Task completion rate is significantly better than the baseline, benefiting from LLM semantic understanding and multi-planner flexibility;
  • LLM query costs are kept under control through intelligent script reuse and caching;
  • Safety compliance is comparable to specialized autonomous driving methods;
  • The system is robust to LLM reasoning delays, as the underlying scheduler can keep running on existing scripts.
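The script-reuse and delay-tolerance behaviors described above can be sketched as a small cache. The cache key (normalized command text) and the fallback policy (keep running the most recent script while the LLM is unavailable) are assumptions, and `_query_llm` is a stand-in for a real LLM round trip.

```python
class ScriptCache:
    """Caches scheduling scripts per command; falls back when the LLM lags."""

    def __init__(self):
        self._cache = {}        # normalized command -> scheduling script
        self._last_script = None
        self.llm_calls = 0

    def _query_llm(self, command: str) -> str:
        # Stand-in for an LLM call that generates a scheduling script.
        self.llm_calls += 1
        return f"script_for({command})"

    def get_script(self, command: str, llm_available: bool = True) -> str:
        key = command.strip().lower()
        if key in self._cache:          # reuse: no LLM query needed
            return self._cache[key]
        if not llm_available:           # LLM delayed: keep the scheduler
            return self._last_script    # running on the existing script
        script = self._query_llm(command)
        self._cache[key] = script
        self._last_script = script
        return script

cache = ScriptCache()
s1 = cache.get_script("Drive slower")                     # one LLM query
s2 = cache.get_script("drive slower")                     # cache hit
s3 = cache.get_script("Turn left", llm_available=False)   # delay fallback
```

A production system would also need cache invalidation when road context changes; this sketch only shows why repeated or delayed queries need not block the control loop.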

Section 05

Conclusion and Future Directions

Technical Insights

Demonstrates the feasibility of natural language as a control interface for autonomous driving; the centralized scheduling architecture can be extended to other robot semantic-control collaboration scenarios.

Future Directions

Expand context awareness (combining passenger preferences and road conditions), explore multi-modal interaction (voice + gesture + vision), and improve real-time performance and edge deployment capabilities.

Conclusion

This framework takes an important step toward building a natural, transparent, and reliable autonomous driving interaction experience; in the future, it is expected to make autonomous vehicles intelligent travel partners that understand passengers' intentions.