Fine-Tuning Large Language Models (LLMs) w/ Example Code

I’m hosting a (free) live workshop on how to build an AI resume optimizer!
👉 Learn more: maven.com/p/ba0e76/how-to-build-a-resume-optimizer-with-ai?utm_medium=ll_share_link&utm_source=instructor

This is the 5th video in a series on using large language models (LLMs) in practice. Here, I discuss how to fine-tune an existing LLM for a particular use case and walk through a concrete example with Python code.

Resources:
▶️ Series Playlist: youtube.com/playlist?list=PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0
📰 Read more: towardsdatascience.com/fine-tuning-large-language-models-llms-23473d763b91?sk=fd31e7444cf8f3070d9a843a8218ddad
💻 Example code: github.com/ShawhinT/YouTube-Blog/tree/main/LLMs/fine-tuning
🤗 Final Model: huggingface.co/shawhin/distilbert-base-uncased-lora-text-classification
🔢 Dataset: huggingface.co/datasets/shawhin/imdb-truncated

References:
[1] DeepLearning.AI "Finetuning Large Language Models" short course: deeplearning.ai/short-courses/finetuning-large-language-models/
[2] T. Brown et al., "Language Models are Few-Shot Learners" (GPT-3 paper): arXiv:2005.14165 [cs.CL]
[3] W. X. Zhao et al., "A Survey of Large Language Models": arXiv:2303.18223 [cs.CL]
[4] L. Ouyang et al., "Training Language Models to Follow Instructions with Human Feedback" (InstructGPT paper): arXiv:2203.02155 [cs.CL]
[5] 🤗 PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware: huggingface.co/blog/peft
[6] E. Hu et al., "LoRA: Low-Rank Adaptation of Large Language Models": arXiv:2106.09685 [cs.CL]
[7] Original dataset source — Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.

--
Homepage: shawhintalebi.com/
Book a call: calendly.com/shawhintalebi

Intro - 0:00
What is Fine-tuning? - 0:32
Why Fine-tune - 3:29
3 Ways to Fine-tune - 4:25
Supervised Fine-tuning in 5 Steps - 9:04
3 Options for Parameter Tuning - 10:00
Low-Rank Adaptation (LoRA) - 11:37
Example code: Fine-tuning an LLM with LoRA (sketched below) - 15:40
Load Base Model - 16:02
Data Prep - 17:44
Model Evaluation - 21:49
Fine-tuning with LoRA - 24:10
Fine-tuned Model - 26:50
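
--
For reference, here is a minimal sketch of the workflow covered in the chapters above, using the Hugging Face transformers, datasets, evaluate, and peft libraries. The base model (distilbert-base-uncased) and dataset (shawhin/imdb-truncated) come from the links above; the hyperparameters, target modules, and split names are illustrative assumptions, not necessarily the exact values used in the video. The idea behind LoRA is to freeze the pretrained weights W and train only a low-rank update ΔW = BA, where the rank r is much smaller than the dimensions of W, so only a small fraction of the parameters is trainable.

```python
# Minimal LoRA fine-tuning sketch (assumed hyperparameters, not the video's exact values)
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

# Load Base Model (16:02): DistilBERT with a 2-label classification head
checkpoint = "distilbert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Data Prep (17:44): tokenize the truncated IMDB reviews
dataset = load_dataset("shawhin/imdb-truncated")  # assumed train/validation splits

def tokenize(examples):
    return tokenizer(examples["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)
collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Model Evaluation (21:49): accuracy of predicted sentiment labels
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

# Fine-tuning with LoRA (24:10): freeze the base weights and train low-rank
# updates on the attention query projections (an illustrative target choice)
peft_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=4, lora_alpha=32,
                         lora_dropout=0.01, target_modules=["q_lin"])
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only a small fraction of weights train

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="distilbert-lora", learning_rate=1e-3,
                           per_device_train_batch_size=4, num_train_epochs=10,
                           eval_strategy="epoch"),  # evaluation_strategy on older transformers
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    data_collator=collator,
    compute_metrics=compute_metrics,
)
trainer.train()
```

Because the base weights stay frozen, only the small A and B matrices (plus the classification head) get updated, which is what makes this feasible on modest hardware.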
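
And a sketch of loading the final model linked above for inference. This assumes the Hub repo stores LoRA adapter weights in PEFT format (so peft can resolve the base model from the adapter config) and that label id 1 corresponds to a positive review; adjust if the repo holds a fully merged model instead.

```python
# Inference sketch with the fine-tuned adapter (assumed PEFT-format repo)
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

repo = "shawhin/distilbert-base-uncased-lora-text-classification"
model = AutoPeftModelForSequenceClassification.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Not a fan, don't recommend.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("Positive" if logits.argmax(dim=-1).item() == 1 else "Negative")  # assumed label map
```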