LLM Visuals

Home | Chat Templates | Data Types | LoRA | Quantization | SFT


Supervised Fine-Tuning (SFT)

These images were originally published in the book “A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face”.

They are also available at the book’s official repository: https://github.com/dvgodoy/FineTuningLLMs.

Index

** CLICK ON THE IMAGES FOR FULL SIZE **

Training Loop

Attention

Without LoRA

Quantized Optimizer

LoRA

Activations

Formulas

Memory

Stochastic Rounding
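
The index above covers the building blocks of a supervised fine-tuning run (training loop, attention, LoRA, quantization, memory). As a rough companion to the Training Loop images, here is a minimal sketch of a single SFT step with plain PyTorch and Hugging Face Transformers; the model name, the toy prompt/response text, and the hyperparameters are illustrative assumptions, not taken from the book.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative choices, not from the book: any small causal LM and any
# prompt/response pair rendered as plain text would do.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy SFT "dataset": a single prompt/response example.
texts = ["Question: What does SFT stand for?\nAnswer: Supervised fine-tuning."]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()

for text in texts:
    batch = tokenizer(text, return_tensors="pt")
    # For causal-LM SFT the labels are the input ids themselves;
    # the model shifts them internally to compute the next-token loss.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()   # backward pass
    optimizer.step()          # weight update
    optimizer.zero_grad()     # reset gradients for the next step
```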

This work is licensed under a Creative Commons Attribution 4.0 International License.
