Cornell CS4782 Course Project: Guide to Reproduction and Validation of the LoRA Low-Rank Adaptation Method
This repository contains the final project for Cornell University's CS4782 course. It reproduces the core experiments of the LoRA (Low-Rank Adaptation) paper and validates its effectiveness for parameter-efficient fine-tuning. Using the GPT-2 Small model on the E2E NLG dataset, our results show that LoRA achieves performance close to full fine-tuning while training only 0.06% to 0.24% of the parameters, providing empirical support for the efficient adaptation of large language models.
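To make the parameter savings concrete, the following is a minimal numpy sketch of the LoRA idea (the class and parameter names are illustrative assumptions, not this project's actual code): the frozen weight W is left untouched, and a low-rank update B @ A is learned in its place, so trainable parameters drop from d_out * d_in to r * (d_in + d_out).

```python
import numpy as np

class LoRALinear:
    """Illustrative LoRA layer sketch (hypothetical names, not the project's code).

    Forward pass: h = W x + (alpha / r) * B A x, where W is frozen,
    A is (r x d_in), B is (d_out x r), and only A and B would be trained.
    """

    def __init__(self, d_in, d_out, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (stands in for a GPT-2 attention projection).
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
        # Low-rank factors: A gets a small random init, B starts at zero,
        # so the adapted layer initially computes exactly W x (as in the paper).
        self.A = rng.standard_normal((r, d_in)) * 0.01
        self.B = np.zeros((d_out, r))
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the scaled low-rank correction.
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)

    def trainable_fraction(self):
        # Share of parameters that would actually receive gradients.
        lora = self.A.size + self.B.size
        return lora / (self.W.size + lora)
```

For a 768x768 projection (GPT-2 Small's hidden size) with r=4, `trainable_fraction` comes out around 1%, and applying LoRA to only a few weight matrices per layer brings the whole-model fraction down to the sub-1% range the results above describe.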