TRLawBench: A Large Language Model Evaluation Benchmark for the Turkish Legal Domain

TRLawBench is a large language model evaluation benchmark specifically designed for the Turkish legal domain. It systematically assesses AI models' capabilities in legal reasoning and knowledge mastery using real questions from official Turkish exams.

Tags: large language models · legal AI · Turkish · benchmarks · judicial exams · Gemma 4 · model evaluation · legal reasoning
Published 2026-04-04 03:45 · Recent activity 2026-04-04 03:50 · Estimated read: 6 min

Section 01

TRLawBench: Introduction to the Large Language Model Evaluation Benchmark for the Turkish Legal Domain

TRLawBench is a large language model evaluation benchmark designed for the Turkish legal domain. It aims to assess AI models' legal reasoning capabilities and knowledge mastery using real questions from official Turkish exams. This benchmark fills the gap in Turkish legal AI evaluation, adopting two evaluation modes (standard mode and reasoning mode). Preliminary tests show that advanced models still have room for improvement in accuracy on this benchmark, which is of great significance for promoting the professionalization and localization of legal AI.


Section 02

Background and Motivation of TRLawBench

With the global development of large language models, evaluation in specific professional domains has become a key issue. The legal domain is challenging due to the need for rich knowledge reserves, complex reasoning abilities, and an understanding of the subtle differences in the judicial system. The Turkish legal system integrates civil law and local traditions, and existing general benchmarks cannot capture the uniqueness of its language and legal culture. Therefore, the TRLawBench project was launched to fill the gap in Turkish legal AI evaluation.


Section 03

Composition and Sources of the TRLawBench Dataset

The TRLawBench dataset contains 97 carefully selected legal questions, all drawn from past official Turkish exams. Sources include the Judge and Prosecutor Exam (HMGS), the Legal Entrance Exam for Foreign Students (İYÖS), and professional legal exams organized by the Ministry of Justice. All questions have been verified by legal professionals for accuracy and timeliness. The dataset prioritizes quality over quantity: every question has appeared in a real exam.
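A multiple-choice record of this kind might be stored as sketched below. This is a minimal illustration only: the field names (`id`, `source`, `question`, `choices`, `answer`) and the placeholder values are assumptions, not the benchmark's published schema.

```python
# Illustrative sketch of a TRLawBench-style record.
# Field names and values are hypothetical, not the official schema.
record = {
    "id": "hmgs-example-001",        # hypothetical identifier
    "source": "HMGS",                # Judge and Prosecutor Exam
    "question": "Ornek soru metni",  # placeholder question text
    "choices": {"A": "secenek A", "B": "secenek B",
                "C": "secenek C", "D": "secenek D", "E": "secenek E"},
    "answer": "C",                   # gold answer letter
}

def validate(rec: dict) -> bool:
    """Check that the gold answer letter is among the listed choices."""
    return rec["answer"] in rec["choices"]

print(validate(record))  # prints "True"
```

A check like `validate` is the kind of sanity pass the article's "verified by legal professionals" step would complement on the content side.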


Section 04

Evaluation Methods and Preliminary Results of TRLawBench

TRLawBench uses a standardized evaluation process, querying models through the OpenRouter API, and supports two modes:

  1. Standard mode: the model answers questions directly, simulating real exam conditions;
  2. Reasoning mode: the model shows its thinking process, which helps evaluate the completeness of its reasoning chain.

Preliminary results with the Google Gemma 4 31B IT model show 60.82% accuracy (59/97) in standard mode and 71.13% accuracy (69/97) in reasoning mode: eliciting explicit reasoning improves accuracy by roughly 10 percentage points.

Section 05

Limitations and Future Improvement Directions of TRLawBench

TRLawBench currently has the following limitations:

  1. Limited dataset size (97 questions), with insufficient coverage of legal branches;
  2. Single question type (mainly multiple-choice), lacking open-ended questions and case analyses;
  3. Questions mostly test knowledge recall, with insufficient assessment of deep legal reasoning.

Future improvements include expanding the dataset to cover more branches of law, adding open-ended questions, evaluating and comparing more models, and developing fine-grained metrics that distinguish knowledge-based errors from reasoning-based errors.

Section 06

Implications of TRLawBench for AI Legal Applications and Conclusion

The results of TRLawBench have important implications for AI legal applications:

  1. Language specificity is crucial; results from general English benchmarks cannot be directly applied to legal scenarios in other languages;
  2. Reasoning ability is key; mere knowledge memory is insufficient to handle complex legal issues;
  3. Professional domains require specialized evaluation benchmarks; general benchmarks cannot capture a domain's unique challenges.

Conclusion: TRLawBench is an important step toward the professionalization and localization of legal AI evaluation. Although current models leave room for improvement, the benchmark offers an objective view of where they stand and points out directions for progress. It may serve as a reference standard for the development of legal AI in Turkey and beyond.