Fine-tuning LLMs on Palmetto

Workshop Description

This workshop series introduces essential concepts related to the fine-tuning of large language models (LLMs), and teaches how to fine-tune an LLM using PyTorch on Palmetto. Topics include: when fine-tuning is appropriate (and when it is not the right solution), parameter-efficient fine-tuning methods vs. full fine-tuning, and what kind and quantity of data is required for fine-tuning. Participants will learn how to efficiently use Palmetto resources to fine-tune pre-trained LLMs.

Prerequisites: Participants are expected to have experience running LLMs on Palmetto or on their own workstations. These prerequisites could be satisfied by previous participation in either of our workshops: Attention, Transformers, and LLMs or Running LLMs on Palmetto.
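To illustrate the contrast between parameter-efficient fine-tuning and full fine-tuning mentioned above, here is a minimal LoRA-style sketch in plain PyTorch: the pre-trained weights are frozen and only a small low-rank update is trained. This is an illustrative example, not workshop material; the class name, rank, and alpha values are assumptions chosen for the demonstration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update (LoRA-style sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # Low-rank factors: A is small random, B starts at zero so the
        # wrapped layer initially behaves exactly like the base layer.
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the scaled low-rank adaptation
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

layer = LoRALinear(nn.Linear(64, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} of {total}")
```

Only the two low-rank matrices (512 parameters here) receive gradients, while the 4,160 frozen base parameters are untouched; this is the kind of memory saving that makes fine-tuning large models feasible on shared cluster GPUs.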

Session Information

Session #1 for Spring 2025

Date: Friday, February 28, 2025
Time: 1:00 PM — 3:30 PM (2 hours 30 minutes)
Location: Cooper 204