I’m hosting a (free) live workshop on how to build an AI resume optimizer!
👉 Learn more: maven.com/p/ba0e76/how-to-build-a-resume-optimizer-with-ai?utm_medium=ll_share_link&utm_source=instructor
This is the 5th video in a series on using large language models (LLMs) in practice. Here, I discuss how to fine-tune an existing LLM for a particular use case and walk through a concrete example with Python code.
Resources:
▶️ Series Playlist: youtube.com/playlist?list=PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0
📰 Read more: towardsdatascience.com/fine-tuning-large-language-models-llms-23473d763b91?sk=fd31e7444cf8f3070d9a843a8218ddad
💻 Example code: github.com/ShawhinT/YouTube-Blog/tree/main/LLMs/fine-tuning
Final Model: huggingface.co/shawhin/distilbert-base-uncased-lora-text-classification
🔢 Dataset: huggingface.co/datasets/shawhin/imdb-truncated
References:
[1] Deeplearning.ai Finetuning Large Language Models Short Course: deeplearning.ai/short-courses/finetuning-large-language-models/
[2] arXiv:2005.14165 [cs.CL] (GPT-3 Paper)
[3] arXiv:2303.18223 [cs.CL] (Survey of LLMs)
[4] arXiv:2203.02155 [cs.CL] (InstructGPT paper)
[5] 🤗 PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware: huggingface.co/blog/peft
[6] arXiv:2106.09685 [cs.CL] (LoRA paper)
[7] Original dataset source — Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
--
Homepage: shawhintalebi.com/
Book a call: calendly.com/shawhintalebi
Intro - 0:00
What is Fine-tuning? - 0:32
Why Fine-tune? - 3:29
3 Ways to Fine-tune - 4:25
Supervised Fine-tuning in 5 Steps - 9:04
3 Options for Parameter Tuning - 10:00
Low-Rank Adaptation (LoRA) - 11:37
Example code: Fine-tuning an LLM with LoRA - 15:40
Load Base Model - 16:02
Data Prep - 17:44
Model Evaluation - 21:49
Fine-tuning with LoRA - 24:10
Fine-tuned Model - 26:50
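The LoRA chapter (11:37) comes down to learning a low-rank weight update instead of retraining full weight matrices. Here is a minimal NumPy sketch of that core idea (illustrative only, with assumed dimensions typical of a DistilBERT layer; this is not the video's example code):

```python
import numpy as np

# LoRA idea: instead of updating a full weight matrix W (d x k), learn a
# low-rank update delta_W = B @ A, where B is (d x r) and A is (r x k)
# with rank r << min(d, k). Only A and B are trained; W stays frozen.

d, k, r = 768, 768, 8               # assumed layer dims; r is the LoRA rank

W = np.random.randn(d, k)           # frozen pretrained weight
A = np.random.randn(r, k) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                # zero-initialized so delta_W starts at 0

delta_W = B @ A                     # low-rank update, same shape as W
W_adapted = W + delta_W             # effective weight used at inference

full_params = d * k                 # parameters a full update would train
lora_params = d * r + r * k         # parameters LoRA actually trains
print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {lora_params / full_params:.1%}")
```

With these dimensions, LoRA trains roughly 2% of the parameters a full update would, which is why it fits on low-resource hardware (see the PEFT blog post in [5]).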