Zing Forum


Optimization of Bengali Question Answering Systems: A Study on Enhancing LLM Performance via Advanced Prompt Engineering

This article introduces the PMSCS-Thesis-Code project and explores how to enhance the performance of large language models (LLMs) on Bengali question answering tasks using advanced prompt engineering techniques.

Tags: Bengali QA systems · Prompt engineering · Low-resource languages · Large language models · Cross-lingual transfer
Published 2026-04-03 19:42 · Recent activity 2026-04-03 19:54 · Estimated read 5 min

Section 01

Introduction: How Advanced Prompt Engineering Boosts LLM Performance on Bengali Question Answering

This article introduces the PMSCS-Thesis-Code project, which tackles the weak performance of question answering systems for Bengali, a low-resource language. The project uses advanced prompt engineering techniques to improve how large language models (LLMs) handle this task, offering practical experience for AI applications in other low-resource languages.


Section 02

Background: AI Challenges for Low-Resource Languages

The current LLM ecosystem exhibits a clear language bias: training data, model architectures, and evaluation benchmarks are centered on English. Although Bengali has over 230 million speakers, it is digitally under-resourced: high-quality annotated datasets are scarce, pre-trained model options are limited, and dedicated evaluation benchmarks are almost nonexistent. This motivates leveraging the cross-lingual capabilities of general-purpose LLMs to bridge the resource gap.


Section 03

Methodology: Advanced Prompt Engineering Strategies and Experimental Design

Prompt engineering guides model behavior by optimizing input prompts without modifying model parameters, which makes it especially valuable for low-resource languages. The project explores advanced strategies such as few-shot learning, chain-of-thought prompting, and multilingual prompting. The experiments follow a controlled-variable design, with evaluation metrics covering accuracy, response relevance, and language fluency. Because suitable benchmarks are lacking, the project builds and expands its own test datasets and evaluates mainstream models such as GPT and LLaMA. Specific strategies include in-context learning optimization, instruction-style prompts, and multilingual mixing.
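The three strategies above can be sketched as plain prompt-construction helpers. This is a minimal illustration only: the function names and templates are assumptions for exposition, not code from the PMSCS-Thesis-Code repository.

```python
def build_few_shot_prompt(examples, question):
    """Few-shot: prepend worked QA pairs before the new question."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in examples]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

def build_cot_prompt(question):
    """Chain-of-thought: ask the model to reason before answering."""
    return (f"Question: {question}\n"
            "Think through the problem step by step, then give the final answer.")

def build_multilingual_prompt(question_bn, instruction_en):
    """Multilingual mixing: an English instruction paired with a Bengali question."""
    return f"{instruction_en}\n\nপ্রশ্ন: {question_bn}\nউত্তর:"
```

The resulting strings would then be sent to the model under test (GPT, LLaMA, etc.); the controlled-variable design keeps the question fixed and varies only the prompt template.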


Section 04

Experimental Results and Key Findings

Advanced prompt engineering yields significant performance improvements. Different strategies suit different question types: chain-of-thought prompting is effective for complex reasoning, while few-shot prompting works better for factual questions. Cross-lingual transfer effects are pronounced, providing a reusable methodology for AI applications in other low-resource languages.
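One way to operationalize this finding is a simple routing rule that maps a coarse question type to the better-performing strategy. The function name, type labels, and default are illustrative assumptions, not part of the project's reported method.

```python
def pick_strategy(question_type: str) -> str:
    """Map a coarse question type to the prompt strategy reported to work best."""
    routing = {
        "reasoning": "chain_of_thought",  # multi-step/complex questions
        "factual": "few_shot",            # direct fact lookups
    }
    # Falling back to few-shot for unlisted types is an assumption, not a finding.
    return routing.get(question_type, "few_shot")
```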


Section 05

Conclusions and Implications for Low-Resource Language NLP

Prompt engineering is an effective strategy for AI applications in low-resource languages, improving performance even without large annotated datasets or dedicated models. The methodology extends to other tasks, such as summarization and translation, and to other low-resource languages in South Asia and Africa. Multilingual LLMs show cross-lingual transfer potential, and the project embodies the democratization of AI technology by benefiting marginalized language communities.


Section 06

Limitations and Future Research Directions

Prompt engineering cannot compensate for fundamental gaps in a model's language representation. Future directions include combining prompt engineering with lightweight fine-tuning, building larger-scale Bengali evaluation benchmarks, and exploring model distillation to transfer the capabilities of large models to smaller ones.