# Summarization Fine-tuning Dataset
A dataset of 2000 examples for fine-tuning small language models on summarization tasks.
## Statistics
- Total examples: 2000
- Train examples: 1600 (80.0%)
- Validation examples: 200 (10.0%)
- Test examples: 200 (10.0%)
## Dataset Distribution
| Dataset | Count | Percentage |
|---|---|---|
| xsum | 2000 | 100.0% |
## Format
The dataset is provided in the Alpaca instruction format, with `instruction`, `input`, and `output` fields per example.
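A minimal sketch of what a single record looks like in this format (the field names follow the standard Alpaca schema; the instruction and text shown here are illustrative, not taken from the dataset):

```python
# Illustrative Alpaca-format record: instruction / input / output fields.
example = {
    "instruction": "Summarize the following article in one sentence.",
    "input": "The full article text goes here.",
    "output": "A one-sentence summary of the article.",
}

# Every example in each split carries exactly these three fields.
assert set(example) == {"instruction", "input", "output"}
```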
## Configuration
- Maximum tokens: 2000
- Tokenizer: gpt2
- Random seed: 42
## Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("YOUR_USERNAME/summarization-finetune-10k")

# Access the splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]
```
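For fine-tuning, each Alpaca-format record is typically rendered into a single prompt string. A sketch using the common Alpaca prompt template (the template and the sample record below are assumptions for illustration, not part of the dataset itself):

```python
def format_alpaca(example: dict) -> str:
    # Render an instruction/input/output record into one training prompt.
    return (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example['input']}\n\n"
        f"### Response:\n{example['output']}"
    )

# Hypothetical record for demonstration.
record = {
    "instruction": "Summarize the following article in one sentence.",
    "input": "Some article text.",
    "output": "A short summary.",
}
prompt = format_alpaca(record)
print(prompt)
```

The same function can be mapped over a split, e.g. `train_data.map(lambda ex: {"text": format_alpaca(ex)})`.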