Fine-Tuning Examples
1. Llama Fine-Tuning
This example demonstrates how to fine-tune Llama 2 on a Q&A dataset.
1.1 Problem Configuration
- Name: Llama Finetuning
- Problem Type: text_causal_language_modeling
- Model Source: HF
- Model Name: meta-llama/Llama-2-7b-hf
- Secrets Blueprint: HF Meta (requires a token with access to meta-llama/Llama-2-7b-hf)
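Under a causal language-modeling objective, a prompt/answer pair is typically concatenated into one token sequence, with the prompt positions masked out of the loss. The sketch below illustrates that preprocessing step; the token IDs are made up for illustration, and the platform's actual pipeline may differ in details.

```python
def build_example(prompt_ids, answer_ids, ignore_index=-100):
    """Concatenate prompt and answer tokens into one training sequence.

    Labels at prompt positions are set to ignore_index so that the
    cross-entropy loss is computed only on the answer tokens.
    """
    input_ids = prompt_ids + answer_ids
    labels = [ignore_index] * len(prompt_ids) + answer_ids
    return input_ids, labels

# Illustrative token IDs (not real tokenizer output).
inp, lab = build_example([101, 102, 103], [201, 202])
print(inp)  # [101, 102, 103, 201, 202]
print(lab)  # [-100, -100, -100, 201, 202]
```

The `-100` sentinel is the conventional ignore index used by PyTorch's cross-entropy loss, which is why masked-out prompt positions are labeled with it.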
1.2 Dataset
- Train Data: finetune_train.csv
- Validation Data: finetune_validation.csv
- Answer Column: output
- Prompt Column: instruction
- Validation Size: 0.01
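The training and validation CSVs are expected to contain the two columns named above: `instruction` (the prompt) and `output` (the answer). The snippet below writes a tiny example file in that shape; the row contents are invented purely for illustration.

```python
import csv
import io

# Hypothetical rows matching the Prompt Column ("instruction")
# and Answer Column ("output") configured for this dataset.
rows = [
    {"instruction": "What is the capital of France?", "output": "Paris."},
    {"instruction": "Name one benefit of LoRA fine-tuning.",
     "output": "It trains small adapter matrices instead of all weights."},
]

buf = io.StringIO()  # swap in open("finetune_train.csv", "w", newline="") to write a real file
writer = csv.DictWriter(buf, fieldnames=["instruction", "output"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```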
1.3 Output Storage
- Model Storage: HF
- Model Name: llama-finetuned
- Store Only LoRA Adapters: true
- Secrets Blueprint: Write Token (token with write access to HF)
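Storing only the LoRA adapters keeps the uploaded artifact small because LoRA replaces a full weight update with two low-rank matrices. The rough arithmetic below shows the size difference for one weight matrix; the dimensions and rank are assumptions chosen for illustration, not values taken from this run.

```python
# Rough illustration of why adapter-only storage is compact.
# Assumptions: one 4096x4096 projection matrix (typical for a 7B model)
# and a LoRA rank of r = 16.
d, r = 4096, 16

full = d * d        # parameters in the full weight matrix
lora = 2 * d * r    # LoRA adapters: A is (d x r), B is (r x d)

print(full, lora, full // lora)  # 16777216 131072 128
```

At these assumed sizes the adapters are 128x smaller than the matrix they modify, which is why adapter-only checkpoints are typically tens of megabytes rather than the full multi-gigabyte model.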
1.4 Run Configuration
- Run Title: Run 01
- Resources:
  - Accelerator: A10G
  - GPU Count: 1
  - Memory: 64
- Tracking:
  - Experiment Name: Llama Finetuning
  - API Key: Generate from User > API Keys
  - Tracking Mode: after_epoch
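A single A10G (24 GB of GPU memory) is a plausible fit for LoRA fine-tuning a 7B model. The back-of-envelope check below estimates the footprint of the frozen base weights alone; activations, optimizer state, and the LoRA parameters add more on top, so treat this as a lower bound rather than a precise requirement.

```python
# Rough fp16 weight footprint for a 7B-parameter model.
params = 7e9
bytes_per_param = 2  # fp16 / bf16

weights_gb = params * bytes_per_param / 1024**3
print(round(weights_gb, 1))  # ~13 GB for weights alone
```

Since only the small LoRA adapters are trained, the optimizer state stays tiny, which is what leaves headroom on a 24 GB card; full fine-tuning of the same model would not fit on one A10G.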
2. Next Steps
- Fine-Tuning UI – Create your own fine-tuning problems and runs.
- Inference UI – Test your newly fine-tuned model in real-time.
- Benchmarks UI – Compare performance against built-in or custom benchmarks.