Benchmarking Llama 3 70B for Code Generation: A Comprehensive Evaluation
Pınar Ersoy
Dataroid
https://orcid.org/0000-0001-9591-3037
Mustafa Erşahin
Commencis
https://orcid.org/0000-0003-4318-8288
DOI: https://doi.org/10.56038/oprd.v4i1.444
Keywords: Large Language Models, Llama 3 70B, PyTorch FSDP, Q-LoRA
Abstract
This study benchmarks the capabilities of Llama 3 70B, a 70-billion-parameter large language model (LLM), for code generation tasks. To effectively train and fine-tune this massive model, we integrate PyTorch Fully Sharded Data Parallel (FSDP) [1], [2] for distributed training and Quantized Low-Rank Adaptation (Q-LoRA) [7] for efficient fine-tuning. We address challenges associated with distributed training, including communication overhead and synchronization complexities, through optimization strategies such as gradient accumulation, optimizer state sharding, and mixed precision training. Additionally, we employ advanced training techniques such as Curriculum Learning, Dynamic Batch Sizing, and Adaptive Optimization Algorithms to enhance model performance and training efficiency. Our primary focus is evaluating the performance of the fine-tuned Llama 3 70B model on two widely recognized code generation benchmarks: HumanEval [8] and MBPP [9]. HumanEval assesses the model's ability to translate natural language problem descriptions into functionally correct code, while MBPP evaluates its proficiency in solving a broad range of programming problems by generating accurate Python code. We present detailed performance results on these benchmarks, analyzing the model's strengths and limitations in various code generation scenarios. Furthermore, we compare the impact of our training and fine-tuning methodologies on scalability, memory efficiency, and training speed, demonstrating the feasibility and efficiency of our approach. This benchmark study offers valuable insights for researchers and practitioners exploring the application of LLMs to code generation. It provides a comprehensive evaluation of Llama 3 70B's capabilities, sheds light on the effectiveness of various training and fine-tuning techniques, and emphasizes the importance of rigorous benchmark evaluation in driving progress within this rapidly evolving field.
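The distributed-training setup summarized above can be illustrated with a minimal PyTorch FSDP sketch: the model is wrapped with full parameter, gradient, and optimizer-state sharding, bf16 mixed precision is applied, and gradients are accumulated over several micro-batches before each optimizer step. The process-group backend, the accumulation step count, and the assumption that the model returns a Hugging Face-style .loss are illustrative choices, not the exact configuration used in this study.

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy

def wrap_with_fsdp(model: torch.nn.Module) -> FSDP:
    # One process per GPU; NCCL is the usual backend for multi-GPU training.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    # bf16 mixed precision for parameters, gradient reduction, and buffers.
    mixed_precision = MixedPrecision(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        buffer_dtype=torch.bfloat16,
    )
    # FULL_SHARD shards parameters, gradients, and optimizer state across ranks.
    return FSDP(
        model.cuda(),
        sharding_strategy=ShardingStrategy.FULL_SHARD,
        mixed_precision=mixed_precision,
    )

def train_epoch(model, optimizer, dataloader, accumulation_steps=8):
    # Gradient accumulation: several micro-batches contribute to one optimizer step.
    model.train()
    optimizer.zero_grad()
    for step, batch in enumerate(dataloader):
        loss = model(**batch).loss / accumulation_steps  # assumes HF-style outputs
        loss.backward()
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()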
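The Q-LoRA fine-tuning component can likewise be sketched with the Hugging Face transformers and peft libraries: the frozen base weights are loaded in 4-bit NF4 precision and only small low-rank adapter matrices are trained. The checkpoint identifier, adapter rank, and target modules below are assumptions for illustration rather than the hyperparameters of this study.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-70B"  # assumed checkpoint identifier

# 4-bit NF4 quantization of the frozen base weights (the "Q" in Q-LoRA).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

# Only the low-rank adapter weights are trainable; the quantized base stays frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters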
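Both HumanEval and MBPP report functional correctness with the pass@k metric: k samples are drawn per problem, and the problem counts as solved if any sample passes the unit tests. A minimal sketch of the standard unbiased estimator (n generated samples per problem, c of them passing) follows; the numbers in the usage line are placeholders, not results from this evaluation.

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased estimate of 1 - C(n - c, k) / C(n, k): the probability that at
    # least one of k samples drawn from the n generations passes the tests.
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Placeholder usage: 200 samples per problem, 37 passing.
print(pass_at_k(200, 37, 1), pass_at_k(200, 37, 10))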
References
[1] PyTorch FSDP Documentation, [Online]. Available: https://pytorch.org/docs/stable/fsdp.html
[2] PyTorch FSDP Tutorial, [Online]. Available: https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html
[3] Megatron-LM Usage Guide, [Online]. Available: https://huggingface.co/docs/accelerate/en/usage_guides/megatron_lm
[4] NVIDIA Megatron-LM, [Online]. Available: https://github.com/NVIDIA/Megatron-LM
[5] DeepSpeed, [Online]. Available: https://www.deepspeed.ai/
[6] Llama 2: Open Foundation and Fine-Tuned Chat Models, [Online]. Available: https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/
[7] QLoRA: Efficient Finetuning of Quantized LLMs, [Online]. Available: https://arxiv.org/abs/2305.14314
[8] OpenAI Codex, [Online]. Available: https://openai.com/blog/openai-codex/
[9] MBPP: Mostly Basic Python Problems, [Online]. Available: https://github.com/google-research/google-research/tree/master/mbpp
[10] Introducing Code Llama, a state-of-the-art large language model for coding, [Online]. Available: https://ai.meta.com/blog/code-llama-large-language-model-coding/
[11] Introducing Meta Llama 3: The most capable openly available LLM to date, [Online]. Available: https://ai.meta.com/blog/meta-llama-3/