
How to Fine-Tune LLMs with LoRA Adapters using Hugging Face TRL

This notebook demonstrates how to efficiently fine-tune large language models using LoRA (Low-Rank Adaptation) adapters. LoRA is a parameter-efficient fine-tuning technique that:

  • Freezes the pre-trained model weights

  • Adds small trainable rank decomposition matrices to attention layers

  • Typically reduces trainable parameters by ~90% (see the quick parameter-count sketch after this list)

  • Maintains model performance while being memory efficient
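
To make the parameter-reduction claim above concrete, here is a minimal back-of-the-envelope sketch. The layer dimensions and rank are illustrative, not SmolLM2's actual configuration: a frozen d×k weight matrix receives a trainable low-rank update B·A with rank r much smaller than d and k.

# Illustrative parameter count for a single linear layer with LoRA (toy numbers)
d, k, r = 2048, 2048, 8  # layer dimensions and LoRA rank (assumed, not SmolLM2's real values)

full_params = d * k            # parameters updated by full fine-tuning
lora_params = d * r + r * k    # parameters in the trainable B (d x r) and A (r x k) matrices

print(f"full: {full_params:,} | lora: {lora_params:,} | ratio: {lora_params / full_params:.2%}")
# -> LoRA trains well under 1% of this layer's parameters for this choice of rank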

We'll cover:

  1. Setup development environment and LoRA configuration

  2. Create and prepare the dataset for adapter training

  3. Fine-tune using trl and SFTTrainer with LoRA adapters

  4. Test the model and merge adapters (optional)

1. Setup development environment

Our first step is to install the Hugging Face libraries and PyTorch, including trl, transformers, and datasets. If you haven't heard of trl yet, don't worry. It is a library built on top of transformers and datasets that makes it easier to fine-tune and align open LLMs, for example with RLHF.

# Install the requirements in Google Colab
# !pip install transformers datasets trl huggingface_hub

# Authenticate to Hugging Face
from huggingface_hub import login

login()

# for convenience you can create an environment variable containing your hub token as HF_TOKEN

2. Load the dataset

# Load a sample dataset
from datasets import load_dataset

# TODO: define your dataset and config using the path and name parameters
dataset = load_dataset(path="HuggingFaceTB/smoltalk", name="everyday-conversations")
dataset
DatasetDict({
    train: Dataset({
        features: ['full_topic', 'messages'],
        num_rows: 2260
    })
    test: Dataset({
        features: ['full_topic', 'messages'],
        num_rows: 119
    })
})
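
To see what the adapter will actually be trained on, it can help to inspect a single example. The snippet below is a small, optional check; the printed content depends on the sample.

# Optional: inspect one training example to see the chat-message structure
example = dataset["train"][0]
print(example["full_topic"])
print(example["messages"][:2])  # a list of {"role": ..., "content": ...} turns
# The chat template turns each list of turns into a single training string.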

3. Fine-tune LLM using trl and the SFTTrainer with LoRA

The SFTTrainer from trl provides integration with LoRA adapters through the PEFT library. Key advantages of this setup include:

  1. Memory Efficiency:

    • Only adapter parameters are stored in GPU memory

    • Base model weights remain frozen and can be loaded in lower precision

    • Enables fine-tuning of large models on consumer GPUs

  2. Training Features:

    • Native PEFT/LoRA integration with minimal setup

    • Support for QLoRA (Quantized LoRA) for even better memory efficiency (see the sketch after this list)

  3. Adapter Management:

    • Adapter weight saving during checkpoints

    • Features to merge adapters back into base model
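
As a rough illustration of the QLoRA support mentioned above, the sketch below shows how one could load the base model in 4-bit before attaching LoRA adapters. It assumes the optional bitsandbytes package is installed and uses illustrative settings; the rest of this notebook sticks to plain LoRA.

# Hedged sketch: 4-bit (QLoRA-style) base model loading; values are illustrative
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store the frozen base weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization from the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16 for stability
)

quantized_model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-135M",
    quantization_config=bnb_config,
    device_map="auto",
)
# The LoRA peft_config defined later in this notebook could be passed to SFTTrainer unchanged.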

We'll use LoRA in our example; QLoRA additionally combines LoRA with 4-bit quantization to further reduce memory usage without sacrificing performance. The setup requires just a few configuration steps:

  1. Define the LoRA configuration (rank, alpha, dropout)

  2. Create the SFTTrainer with PEFT config

  3. Train and save the adapter weights

# Import necessary libraries
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer, setup_chat_format
import torch

device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available() else "cpu"
)

# Load the model and tokenizer
model_name = "HuggingFaceTB/SmolLM2-135M"
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=model_name
).to(device)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)

# Set up the chat format
model, tokenizer = setup_chat_format(model=model, tokenizer=tokenizer)

# Set our name for the finetune to be saved &/ uploaded to
finetune_name = "SmolLM2-FT-MyDataset"
finetune_tags = ["smol-course", "module_1"]

The SFTTrainer supports a native integration with peft, which makes it easy to efficiently tune LLMs with, for example, LoRA. We only need to create our LoraConfig and provide it to the trainer.

Exercise: Define LoRA parameters for finetuning

Take a dataset from the Hugging Face hub and finetune a model on it.

Difficulty Levels

🐢 Use the general parameters for an arbitrary finetune

🐕 Adjust the parameters and review the run in Weights & Biases (see the sketch after this list).

🦁 Adjust the parameters and show the change in inference results.
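
For the 🐕 level, one minimal way to review a run in Weights & Biases (assuming wandb is installed and you are logged in; the run name below is made up) is to switch the reporting backend in the training configuration:

# Hedged sketch for the 🐕 exercise: log training metrics to Weights & Biases
from trl import SFTConfig

wandb_args = SFTConfig(
    output_dir=finetune_name,
    report_to="wandb",                 # instead of report_to="none" used later in this notebook
    run_name="smollm2-lora-exercise",  # assumed run name, pick your own
    logging_steps=10,
)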

from peft import LoraConfig

# TODO: Configure LoRA parameters
# r: rank dimension for LoRA update matrices (smaller = more compression)
rank_dimension = 6
# lora_alpha: scaling factor for LoRA layers (higher = stronger adaptation)
lora_alpha = 8
# lora_dropout: dropout probability for LoRA layers (helps prevent overfitting)
lora_dropout = 0.05

peft_config = LoraConfig(
    r=rank_dimension,  # Rank dimension - typically between 4-32
    lora_alpha=lora_alpha,  # LoRA scaling factor - typically 2x rank
    lora_dropout=lora_dropout,  # Dropout probability for LoRA layers
    bias="none",  # Bias type for LoRA; "none" means no bias parameters are trained
    target_modules="all-linear",  # Which modules to apply LoRA to
    task_type="CAUSAL_LM",  # Task type for model architecture
)

Before we can start training, we need to define the hyperparameters (SFTConfig, which extends TrainingArguments) that we want to use.

# Training configuration
# Hyperparameters based on QLoRA paper recommendations
args = SFTConfig(
    # Output settings
    output_dir=finetune_name,  # Directory to save model checkpoints
    # Training duration
    num_train_epochs=1,  # Number of training epochs
    # Batch size settings
    per_device_train_batch_size=2,  # Batch size per GPU
    gradient_accumulation_steps=2,  # Accumulate gradients for larger effective batch
    # Memory optimization
    gradient_checkpointing=True,  # Trade compute for memory savings
    # Optimizer settings
    optim="adamw_torch_fused",  # Use fused AdamW for efficiency
    learning_rate=2e-4,  # Learning rate (QLoRA paper)
    max_grad_norm=0.3,  # Gradient clipping threshold
    # Learning rate schedule
    warmup_ratio=0.03,  # Portion of steps for warmup
    lr_scheduler_type="constant",  # Keep learning rate constant after warmup
    # Logging and saving
    logging_steps=10,  # Log metrics every N steps
    save_strategy="epoch",  # Save checkpoint every epoch
    # Precision settings
    bf16=True,  # Use bfloat16 precision
    # Integration settings
    push_to_hub=False,  # Don't push to HuggingFace Hub
    report_to="none",  # Disable external logging
)

We now have every building block we need to create our SFTTrainer and start training our model.

max_seq_length = 1512  # max sequence length for model and packing of the dataset

# Create SFTTrainer with LoRA configuration
trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    peft_config=peft_config,  # LoRA configuration
    max_seq_length=max_seq_length,  # Maximum sequence length
    tokenizer=tokenizer,
    packing=True,  # Enable input packing for efficiency
    dataset_kwargs={
        "add_special_tokens": False,  # Special tokens handled by template
        "append_concat_token": False,  # No additional separator needed
    },
)
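
Before launching training, it can be reassuring to check how small the trainable portion of the model actually is. Assuming SFTTrainer has wrapped the model as a peft PeftModel (which it does when a peft_config is passed), the helper below prints the counts:

# Optional sanity check: report trainable vs. total parameters of the wrapped model
trainer.model.print_trainable_parameters()
# Expected output looks like:
# trainable params: ... || all params: ... || trainable%: ...
# (the exact numbers depend on the rank and target_modules chosen above)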

Start training our model by calling the train() method on our Trainer instance. This will run the training loop for the single epoch configured above. Since we are using a PEFT method, we will only save the adapted model weights and not the full model.

# start training; with push_to_hub=False the model is only saved to the output directory
trainer.train()

# save model
trainer.save_model()
TrainOutput(global_step=72, training_loss=1.6402628521124523, metrics={'train_runtime': 195.2398, 'train_samples_per_second': 1.485, 'train_steps_per_second': 0.369, 'total_flos': 282267289092096.0, 'train_loss': 1.6402628521124523, 'epoch': 0.993103448275862})

For reference, a run with Flash Attention for 3 epochs on a dataset of 15k samples took 4:14:36 on a g5.2xlarge. At $1.21/h, that brings the total cost to only ~$5.30.

Merge LoRA Adapter into the Original Model

When using LoRA, we only train adapter weights while keeping the base model frozen. During training, we save only these lightweight adapter weights (~2-10MB) rather than a full model copy. However, for deployment, you might want to merge the adapters back into the base model for:

  1. Simplified Deployment: Single model file instead of base model + adapters

  2. Inference Speed: No adapter computation overhead

  3. Framework Compatibility: Better compatibility with serving frameworks

from peft import AutoPeftModelForCausalLM

# Load PEFT model on CPU
model = AutoPeftModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=args.output_dir,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)

# Merge LoRA and base model and save
merged_model = model.merge_and_unload()
merged_model.save_pretrained(
    args.output_dir, safe_serialization=True, max_shard_size="2GB"
)
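
Once merged and saved, the result is a plain transformers model, so it can be reloaded for inference without peft. A minimal sketch, assuming the tokenizer was also saved to the same output directory by trainer.save_model():

# Hedged sketch: reload the merged model as a regular transformers checkpoint
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

reloaded = AutoModelForCausalLM.from_pretrained(
    args.output_dir,  # directory written by merged_model.save_pretrained(...)
    torch_dtype=torch.float16,
    device_map="auto",
)
reloaded_tokenizer = AutoTokenizer.from_pretrained(args.output_dir)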

4. Test Model and run Inference

After the training is done we want to test our model. We will run a few sample prompts through it in a simple loop and judge the responses by inspection.

Bonus Exercise: Load LoRA Adapter

Use what you learnt from the example notebook to load your trained LoRA adapter for inference.

# free the memory again
del model
del trainer
torch.cuda.empty_cache()
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline

# Load Model with PEFT adapter
tokenizer = AutoTokenizer.from_pretrained(finetune_name)
model = AutoPeftModelForCausalLM.from_pretrained(
    finetune_name, device_map="auto", torch_dtype=torch.float16
)
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer
)  # device placement is already handled by device_map="auto"

Let's test some sample prompts and see how the model performs.

prompts = [
    "What is the capital of Germany? Explain why thats the case and if it was different in the past?",
    "Write a Python function to calculate the factorial of a number.",
    "A rectangular garden has a length of 25 feet and a width of 15 feet. If you want to build a fence around the entire garden, how many feet of fencing will you need?",
    "What is the difference between a fruit and a vegetable? Give examples of each.",
]


def test_inference(prompt):
    prompt = pipe.tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
    )
    outputs = pipe(
        prompt,
        max_new_tokens=256,  # cap the generated length; the default limit is too short for full answers
    )
    return outputs[0]["generated_text"][len(prompt) :].strip()


for prompt in prompts:
    print(f" prompt:\n{prompt}")
    print(f" response:\n{test_inference(prompt)}")
    print("-" * 50)