# Summarization Fine-tuning Dataset
A dataset of 2000 examples for fine-tuning small language models on summarization tasks.
## Statistics
- **Total examples**: 2000
- **Train examples**: 1600 (80.0%)
- **Validation examples**: 200 (10.0%)
- **Test examples**: 200 (10.0%)
## Dataset Distribution
| Dataset | Count | Percentage |
|---------|-------|------------|
| xsum | 2000 | 100.0% |
## Format
The dataset is provided in Alpaca format, i.e. each record has `instruction`, `input`, and `output` fields.
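A minimal sketch of what one Alpaca-format record looks like (the field names follow the standard Alpaca schema; the content below is illustrative, not an actual example from the dataset):

```python
import json

# Illustrative alpaca-style record: instruction + input + expected output.
example = {
    "instruction": "Summarize the following article in one sentence.",
    "input": "The council voted on Tuesday to approve new cycle lanes "
             "along the high street after months of public consultation.",
    "output": "The council approved new high-street cycle lanes on Tuesday.",
}

print(json.dumps(example, indent=2))
```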
## Configuration
- **Maximum tokens**: 2000
- **Tokenizer**: gpt2
- **Random seed**: 42
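One plausible way the token limit above is enforced during dataset construction is to drop records whose combined fields exceed the budget. The sketch below uses a whitespace-split placeholder for `tokenize`; a real pipeline would count tokens with the GPT-2 tokenizer (e.g. `GPT2TokenizerFast` from `transformers`), and the helper name is an assumption, not part of this dataset's tooling:

```python
MAX_TOKENS = 2000  # per the configuration above

def tokenize(text: str) -> list[str]:
    # Placeholder tokenizer: whitespace split.
    # Swap in the GPT-2 BPE tokenizer for real token counts.
    return text.split()

def within_limit(record: dict, max_tokens: int = MAX_TOKENS) -> bool:
    # Count tokens across all three alpaca fields.
    total = sum(len(tokenize(record[k])) for k in ("instruction", "input", "output"))
    return total <= max_tokens

sample = {
    "instruction": "Summarize the following article in one sentence.",
    "input": "A short article body.",
    "output": "A one-sentence summary.",
}
print(within_limit(sample))  # True for this short record
```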
## Usage
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("YOUR_USERNAME/summarization-finetune-10k")
# Access the splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]
```
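For fine-tuning, each record is typically rendered into a single prompt string. The template below is one common Alpaca-style layout; it is an assumption for illustration, not a format prescribed by this dataset card:

```python
# Hypothetical prompt template for turning an alpaca record into
# a single training string (layout is an assumption).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def to_prompt(record: dict) -> str:
    # Fill the template with the record's three alpaca fields.
    return PROMPT_TEMPLATE.format(**record)

record = {
    "instruction": "Summarize the following article in one sentence.",
    "input": "The council voted to approve new cycle lanes.",
    "output": "The council approved new cycle lanes.",
}
print(to_prompt(record))
```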