Zing Forum

HEPTA: An Automated Benchmark Framework for Evaluating the Teaching Effectiveness of LLMs in Human-Computer Interaction Education

Tags: LLM education · AI · HCI · benchmark testing · teaching evaluation · human-computer interaction · automated testing
Published 2026-04-17 01:43 · Recent activity 2026-04-17 01:55 · Estimated read 6 min
1

Section 01

[Main Floor] HEPTA: Guide to the Automated Benchmark Framework for Evaluating LLM Teaching Effectiveness in HCI Education

HEPTA (AI HCI Education Performance Test) is an automated benchmark framework designed specifically to evaluate the teaching effectiveness of large language models (LLMs) in human-computer interaction (HCI) education. As LLMs are increasingly applied in education, objectively assessing their teaching effectiveness in specialized fields such as HCI has become a pressing question. HEPTA fills a gap left by traditional benchmarks (such as MMLU and HumanEval), which do not directly evaluate teaching quality. Through a systematic framework it tests AI performance in HCI education, built around three core components: evaluation dimension design, test dataset construction, and an automated evaluation mechanism, providing a scientific basis for educators, developers, and researchers.
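
To make the overall workflow concrete, below is a minimal Python sketch of what a HEPTA-style benchmark run could look like. The names `run_benchmark`, `ModelFn`, and `ScorerFn` are illustrative assumptions; the post does not describe HEPTA's actual API.

```python
from typing import Callable

# Hypothetical names and signatures; the post does not publish HEPTA's actual API.
ModelFn = Callable[[str], str]           # prompt text in, answer text out
ScorerFn = Callable[[dict, str], dict]   # (test item, answer) -> per-dimension scores

def run_benchmark(model: ModelFn, dataset: list, score: ScorerFn) -> dict:
    """Send each question to the model, collect the answer, and aggregate per-dimension scores."""
    per_dimension = {}
    for item in dataset:
        answer = model(item["question"])            # query the model under test
        for dimension, value in score(item, answer).items():
            per_dimension.setdefault(dimension, []).append(value)
    # Average each dimension's scores into the final report.
    return {dim: sum(vals) / len(vals) for dim, vals in per_dimension.items()}
```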

2

Section 02

Background: Dilemmas in Educational AI Evaluation and the Birth of HEPTA

Although large language models have broad knowledge and strong language generation capabilities, effective teaching also demands conceptual accuracy, logical coherence, adaptability to the learner, the ability to inspire, and domain expertise. Traditional benchmarks mainly evaluate knowledge and reasoning ability and offer no dedicated assessment of teaching quality. HEPTA was created to address this gap, focusing on measuring effectiveness in educational scenarios.

3

Section 03

HEPTA Framework Design Philosophy and Evaluation Dimensions

HEPTA's design reflects what makes HCI education distinctive: it sits at the intersection of psychology, design, and computer science, and it covers both theoretical knowledge and practical skills. The evaluation dimensions are as follows (a data-structure sketch appears after the list):

1. Knowledge Accuracy: mastery of core HCI concepts;
2. Explanation Clarity: explaining complex concepts concisely;
3. Teaching Adaptability: adjusting strategy to the learner's level;
4. Practical Guidance Ability: quality of guidance on design practice.
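
A minimal sketch of how these four dimensions might be represented and combined into an overall score. The identifiers and the weights are assumptions for illustration; the post does not state how HEPTA weights its dimensions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dimension:
    name: str
    description: str
    weight: float    # illustrative weights only; the post does not state HEPTA's weighting

HEPTA_DIMENSIONS = [
    Dimension("knowledge_accuracy",    "Mastery of core HCI concepts",              0.30),
    Dimension("explanation_clarity",   "Concise explanation of complex concepts",   0.25),
    Dimension("teaching_adaptability", "Adjusting strategy to the learner's level", 0.25),
    Dimension("practical_guidance",    "Quality of guidance on design practice",    0.20),
]

def weighted_total(scores: dict) -> float:
    """Combine per-dimension scores (each in 0..1) into one weighted overall score."""
    return sum(d.weight * scores[d.name] for d in HEPTA_DIMENSIONS)
```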

4

Section 04

Test Dataset Construction: Core Support for HEPTA

The HEPTA dataset covers multiple sub-topics of HCI, from basic UI design to advanced interaction technologies. Each test item is reviewed by domain experts to ensure professional representativeness. Question types include concept explanation, case analysis, comparative analysis, and design guidance questions.
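
For illustration, a single test item might look like the sketch below. The field names and the example question are assumptions, not HEPTA's published schema.

```python
# One illustrative test item; the field names and the example question are assumptions,
# not HEPTA's published schema.
example_item = {
    "id": "hepta-ui-0001",
    "subtopic": "basic UI design",            # topics range from basic UI design to advanced interaction technologies
    "question_type": "concept_explanation",   # others: case_analysis, comparative_analysis, design_guidance
    "question": "Explain Fitts's law and how it should inform button sizing on touch screens.",
    "key_concepts": ["movement time", "target size", "target distance"],  # used by keyword matching (Section 05)
    "reference_answer": "Fitts's law predicts that pointing time grows with distance to the target "
                        "and shrinks as the target gets larger, so frequent touch targets should be large and close.",
    "expert_reviewed": True,                  # every item is reviewed by domain experts
}
```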

5

Section 05

Automated Evaluation Mechanism: Balancing Efficiency and Quality

HEPTA implements a fully automated evaluation pipeline: questions are sent to the model, the answers are collected, and each answer is scored by a three-layer system. The basic layer checks key concepts by matching against reference answers; the middle layer uses an LLM judge to assess the answer from a teaching-quality perspective; the advanced layer applies manual spot checks to verify reliability. This hybrid approach balances efficiency and quality.
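
A rough sketch of how the three layers could compose, assuming items carry the key-concept and reference-answer fields sketched in Section 04. The `judge_llm` callable and the 5% sampling rate are assumptions for illustration, not details from the post.

```python
import random

def basic_layer(item: dict, answer: str) -> float:
    """Layer 1: check key concepts by matching against the reference material."""
    concepts = item["key_concepts"]
    hits = sum(1 for c in concepts if c.lower() in answer.lower())
    return hits / len(concepts)

def middle_layer(item: dict, answer: str, judge_llm) -> float:
    """Layer 2: an LLM judge rates the answer from a teaching-quality perspective (0..1)."""
    prompt = (
        "Rate this answer as a piece of HCI teaching on a 0-1 scale, considering "
        "accuracy, clarity, and pedagogical value.\n"
        f"Question: {item['question']}\nAnswer: {answer}\nScore:"
    )
    return float(judge_llm(prompt))   # judge_llm: prompt -> numeric string; this signature is an assumption

def advanced_layer(sample_rate: float = 0.05) -> bool:
    """Layer 3: flag a random sample of answers for manual review to verify reliability."""
    return random.random() < sample_rate

def score_answer(item: dict, answer: str, judge_llm) -> dict:
    return {
        "keyword_coverage": basic_layer(item, answer),
        "teaching_quality": middle_layer(item, answer, judge_llm),
        "flagged_for_manual_review": float(advanced_layer()),
    }
```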

6

Section 06

Application Scenarios and Value: An Evaluation Tool Benefiting Multiple Groups

HEPTA results are valuable to multiple groups: educational technology developers can select models suited to teaching scenarios; model developers get targeted directions for improvement; educational researchers gain insight into the current state and limitations of AI teaching capabilities. Practical applications include model selection and regression testing across model versions.
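
As a usage example, a regression test could compare a candidate model's per-dimension scores against a baseline and flag any dimension that slips. The scores and tolerance below are made-up illustration values, not HEPTA results.

```python
# Illustrative regression check between a baseline model and a candidate upgrade.
# The scores and the tolerance are made-up numbers for demonstration only.
baseline  = {"knowledge_accuracy": 0.82, "explanation_clarity": 0.78,
             "teaching_adaptability": 0.71, "practical_guidance": 0.69}
candidate = {"knowledge_accuracy": 0.85, "explanation_clarity": 0.74,
             "teaching_adaptability": 0.73, "practical_guidance": 0.70}

TOLERANCE = 0.03   # flag any dimension that drops by more than this

regressions = {dim: (baseline[dim], candidate[dim])
               for dim in baseline
               if baseline[dim] - candidate[dim] > TOLERANCE}

print("Regressions:", regressions if regressions else "none")
```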

7

Section 07

Limitations and Future Directions: Room for Continuous Optimization

HEPTA has several limitations: it covers only text-based interaction, not multimodal teaching; its evaluation standards retain some subjectivity; and its dataset requires continuous updates. Future directions include expanding the evaluation dimensions, exploring finer-grained indicators, and establishing a longitudinal tracking mechanism to evaluate coherence in multi-turn dialogue teaching.

8

Section 08

Conclusion: HEPTA Promotes the Maturity of AI Education Applications

HEPTA is an important attempt in the field of AI education evaluation. Through a test framework specifically targeting HCI education scenarios, it provides a scientific basis for evaluating LLM teaching capabilities. It is not only a testing tool but also a catalyst for promoting the maturity of AI education applications, providing a valuable reference framework for relevant developers and researchers.