Section 01
[Introduction] Context-Enhanced Fine-Tuning: A New Method to Improve the Comprehension Ability of Large Language Models
This project explores a method for enriching static datasets with contextual information to deepen the comprehension and improve the response quality of large language models (LLMs). By combining data-simulation and synthetic-data-generation techniques with LoRA for parameter-efficient fine-tuning, the project aims to build more reliable and fairer AI systems. The core idea is to inject relevant background information into static training samples, addressing problems common in traditional fine-tuning: missing context, poor domain adaptability, and bias.
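The two building blocks named above can be sketched briefly. This is a minimal illustration, not the project's actual pipeline: the sample schema and the `enrich` helper are hypothetical, and the LoRA step shows only the low-rank weight-update math (W + B·A) in numpy rather than a real training loop.

```python
import numpy as np

# --- 1. Context injection (hypothetical sample schema) ---
def enrich(sample: dict, retrieved_context: str) -> dict:
    """Prepend background information to a static training sample."""
    return {
        "prompt": f"Context: {retrieved_context}\n\nQuestion: {sample['question']}",
        "answer": sample["answer"],
    }

sample = {"question": "What does LoRA train?",
          "answer": "Low-rank adapter matrices."}
enriched = enrich(sample, "LoRA freezes base weights and trains rank-r factors.")

# --- 2. LoRA update as a low-rank product (numpy sketch) ---
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4                  # rank r << d_in, d_out
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero init

def forward(x):
    # Effective weight is W + B @ A; only A and B would be updated in training.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
```

With B initialized to zero the adapted forward pass is identical to the frozen base model, and the adapter trains only r·(d_in + d_out) parameters instead of d_in·d_out, which is what makes the fine-tuning parameter-efficient.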