Fine-tuning LLMs on Palmetto
Workshop Description
This workshop series introduces essential concepts related to fine-tuning large language models (LLMs) and demonstrates how to fine-tune an LLM using PyTorch on Palmetto. Topics include: when fine-tuning is appropriate (and when it is not the right solution), parameter-efficient fine-tuning methods versus full fine-tuning, and what kind and quantity of data are required for fine-tuning. Participants will learn how to use Palmetto resources efficiently to fine-tune pre-trained LLMs.
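To give a flavor of what parameter-efficient fine-tuning looks like in practice, the sketch below wraps a small pre-trained model with LoRA adapters using the Hugging Face transformers and peft libraries. This is a minimal illustration, not the workshop's actual materials: the base model, target modules, and hyperparameters are all placeholder assumptions.

```python
# Minimal sketch of parameter-efficient fine-tuning with LoRA.
# Model name, target modules, and hyperparameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # hypothetical small base model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA attaches small trainable low-rank adapters to selected weight
# matrices while the original pre-trained weights stay frozen.
lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the adapter output
    target_modules=["c_attn"],  # attention projection layer in GPT-2
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction is trainable

# One illustrative training step on a toy batch.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
batch = tokenizer(["Example fine-tuning text."], return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Because only the adapter weights are updated, this approach needs far less GPU memory than full fine-tuning, which is one reason parameter-efficient methods are covered alongside full fine-tuning in the series.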
Prerequisites
- All workshop participants should have a Palmetto Cluster account. If you do not already have an account, you can visit our getting started page.
- Participants should be familiar with the Python programming language. This requirement could be fulfilled by personal projects, coursework, or completion of the Introduction to Python Programming workshop series.
- Participants are expected to have experience running LLMs on Palmetto or on their own workstations. This requirement could be satisfied by previous participation in either of our workshops: Attention, Transformers, and LLMs or Running LLMs on Palmetto.