Zing Forum

Reading

Chaotic Intern Env: A Benchmark Framework for Evaluating AI Agents in Chaotic Workplace Environments

This article introduces the chaotic-intern-env project, an OpenEnv environment for evaluating AI agents' performance in ambiguous and contradictory workplace workflows. It tests agents' information filtering, conflict resolution, and decision-making abilities through three progressive tasks.

AI代理基准测试OpenEnv职场模拟决策评估LLM评估信息冲突代理行为自动化测试人工智能
Published 2026-04-09 02:16Recent activity 2026-04-09 02:20Estimated read 7 min
Chaotic Intern Env: A Benchmark Framework for Evaluating AI Agents in Chaotic Workplace Environments
1

Section 01

[Introduction] Chaotic Intern Env: A Benchmark Framework for AI Agents in Chaotic Workplace Environments

This article introduces the chaotic-intern-env project, an OpenEnv environment for evaluating AI agents' performance in ambiguous and contradictory workplace workflows. The project fills the gap in existing AI agent benchmarks that are overly idealized. By simulating chaotic scenarios in tech startups, it uses three progressive tasks to test agents' information filtering, conflict resolution, and decision-making abilities. It adopts a deterministic scoring mechanism, providing an evaluation basis for AI agents to move from 'toy demonstrations' to 'production tools'.

2

Section 02

Project Background and Design Philosophy: Filling the Gap in Real-World Workplace Evaluation

Most existing AI agent benchmarks use structured inputs and clear standards, which are disconnected from real workplace dilemmas like information conflicts, ambiguous authority, and time pressure. chaotic-intern-env builds a virtual tech company 'Veltra AI' to let agents experience chaotic scenarios. It follows OpenEnv standards, with a fully deterministic scorer (no subjective judgment), and agent behaviors are binary (e.g., whether the correct tool is called).

3

Section 03

Virtual Company Character Settings: Creating Realistic Information Conflicts

The project designs 5 roles:

  • Priya Nair (CEO): Final authority; key decisions require written authorization;
  • Jordan Mehta (Engineering Lead): Pursues speed, may bypass processes;
  • Sara Okonkwo (Finance Lead): Rigorous and compliant;
  • Liam Torres (Marketing Manager): Often creates false urgency;
  • Dev Patel (Direct Manager): Instructions may be incomplete or conflicting. The multi-role design requires agents to identify information credibility rather than just accepting surface-level instructions.
4

Section 04

Core Architecture and Interface Design: Standardizing Agent Interactions

Defines clear Action and Observation interfaces:

  • ChaoticInternAction: USE_TOOL (call database/email/calendar/calculator), SEND_MESSAGE, MAKE_DECISION, ASK_CLARIFICATION;
  • ChaoticInternObservation: Task description, inbox messages, tool call results, steps/budget/score, etc. All tools are simulated in Python to ensure repeatability and easy deployment.
5

Section 05

Three Progressive Evaluation Tasks: Testing Capabilities Across Different Dimensions

Task 1: Invoice Processing (Easy)

Scenario: 5 emails containing amount conflicts, duplicate invoices, and irrelevant information. Agents need to verify against the database, mark duplicates, and submit the correct amount. Scoring includes correct amount (40%), duplicate identification (30%), etc.

Task 2: Meeting Scheduling (Medium)

Scenario: Conflicting information from managers, clients, and colleagues. The calendar is the authoritative source; agents need to check the calendar, book the correct date, and notify both parties.

Task 3: Budget Reallocation (Hard)

Scenario: Conflict between the CEO's spending freeze order and the marketing manager's $8000 request. Agents need to block the non-compliant request, escalate to the CEO, and approve the compliant $85 request.

6

Section 06

Scoring Mechanism Design: Two-Tier System Reflecting Real-World Workplaces

Uses a two-tier scoring system:

  • Step-level rewards: Successful tool calls (+0.05), reasonable explanations (+0.02), repeated calls (-0.05), etc.;
  • Episode-level scoring: Weighted calculation based on task standards (0-1.0). Unsafe behaviors (e.g., non-compliant approvals) trigger an irreversible -0.5 penalty.
7

Section 07

Baseline Test Results: Mainstream Models Still Need Improvement

Tested with the llama-3.1-8b-instant model:

  • Invoice processing score: 0.20-0.60 (small models easily exhaust the budget due to incorrect query methods);
  • Meeting scheduling score: 0.60-0.85 (best performance, easy to guess the correct answer);
  • Budget reallocation score: 0.35-0.75 (high volatility, depending on whether CEO instructions are prioritized). Average score: 0.45-0.55, indicating that mainstream models still have room for improvement in complex scenarios.
8

Section 08

Deployment Methods and Project Significance: Moving Toward Practical AI Assistants

Deployment Methods: Supports local deployment (Python3.10+, Docker, uv), Docker images, and Hugging Face Spaces online experience; Significance and Outlook: Reveals the gap between AI agents from demonstration to production. It invites the community to explore how to train agents to identify truth and adhere to principles in chaos, which is a key step toward practical AI assistants.