Upload 10 files

- README.md +40 -0
- all_examples.json +0 -0
- dataset_card.json +7 -0
- dataset_info.json +22 -0
- test_alpaca.json +0 -0
- test_raw.json +0 -0
- train_alpaca.json +0 -0
- train_raw.json +0 -0
- val_alpaca.json +0 -0
- val_raw.json +0 -0
README.md
ADDED
@@ -0,0 +1,40 @@
# Summarization Fine-tuning Dataset

A dataset of 2000 examples for fine-tuning small language models on summarization tasks.

## Statistics

- **Total examples**: 2000
- **Train examples**: 1600 (80.0%)
- **Validation examples**: 200 (10.0%)
- **Test examples**: 200 (10.0%)

## Dataset Distribution

| Dataset | Count | Percentage |
|---------|-------|------------|
| xsum    | 2000  | 100.0%     |

## Format

The dataset is provided in the Alpaca instruction format (`instruction`/`input`/`output` records); see the example record below.

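For orientation, an Alpaca-format record looks like the sketch below. The field names follow the standard Alpaca schema, but the instruction wording and contents shown here are illustrative, not copied from the dataset:

```json
{
  "instruction": "Summarize the following article in one sentence.",
  "input": "The full article text goes here...",
  "output": "A one-sentence summary of the article."
}
```
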
## Configuration

- **Maximum tokens**: 2000
- **Tokenizer**: gpt2
- **Random seed**: 42

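The configuration above suggests examples were capped at 2000 gpt2 tokens. As a rough illustration only (the preprocessing script is not part of this upload), a length check with the Hugging Face `transformers` tokenizer might look like this; the `within_limit` helper and the choice to count tokens over the concatenated fields are assumptions:

```python
from transformers import AutoTokenizer

# "gpt2" matches the tokenizer named in the configuration above.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def within_limit(example: dict, max_tokens: int = 2000) -> bool:
    """Return True if the concatenated record fits the token budget.

    Hypothetical helper: the real preprocessing may count fields differently.
    """
    text = example["instruction"] + "\n" + example["input"] + "\n" + example["output"]
    return len(tokenizer(text)["input_ids"]) <= max_tokens
```
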
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("YOUR_USERNAME/summarization-finetune-10k")

# Access the splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]
```

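The repository also stores each split as a standalone Alpaca-format JSON file (`train_alpaca.json`, `val_alpaca.json`, `test_alpaca.json`), so the splits can be loaded explicitly as well. A minimal sketch, assuming the files have been downloaded locally:

```python
from datasets import load_dataset

# Build the splits from the per-split JSON files in this repository.
dataset = load_dataset(
    "json",
    data_files={
        "train": "train_alpaca.json",
        "validation": "val_alpaca.json",
        "test": "test_alpaca.json",
    },
)
print(dataset["train"][0])  # one Alpaca-format record
```
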
all_examples.json
ADDED
The diff for this file is too large to render.
dataset_card.json
ADDED
@@ -0,0 +1,7 @@
```json
{
  "language": [
    "en"
  ],
  "license": "cc-by-4.0",
  "pretty_name": "Summarization Fine-tuning Dataset"
}
```

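On the Hub, card fields like these typically end up as YAML front matter at the top of the README. A small sketch of that mapping, assuming the `pyyaml` package is available (an assumption; this upload does not state how the card file is consumed):

```python
import json

import yaml  # assumption: pip install pyyaml

with open("dataset_card.json") as f:
    card = json.load(f)

# Render the card metadata as a README YAML front-matter block.
print("---\n" + yaml.safe_dump(card, sort_keys=False) + "---")
```
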
dataset_info.json
ADDED
@@ -0,0 +1,22 @@
```json
{
  "name": "Summarization Fine-tuning Dataset",
  "description": "A dataset for fine-tuning small language models on summarization tasks",
  "format": "alpaca",
  "statistics": {
    "total_examples": 2000,
    "train_examples": 1600,
    "val_examples": 200,
    "test_examples": 200,
    "dataset_distribution": {
      "xsum": {
        "count": 2000,
        "percentage": 100.0
      }
    }
  },
  "configuration": {
    "max_tokens": 2000,
    "tokenizer": "gpt2",
    "seed": 42
  }
}
```

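Since `dataset_info.json` records the split sizes, it can double as a quick sanity check after download. A minimal sketch, assuming the file sits in the working directory:

```python
import json

with open("dataset_info.json") as f:
    info = json.load(f)

stats = info["statistics"]
# The split counts should add up to the advertised total (1600 + 200 + 200 = 2000).
assert stats["train_examples"] + stats["val_examples"] + stats["test_examples"] == stats["total_examples"]
print(f"{stats['total_examples']} examples, tokenizer: {info['configuration']['tokenizer']}")
```
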
test_alpaca.json
ADDED
The diff for this file is too large to render.
test_raw.json
ADDED
The diff for this file is too large to render.
train_alpaca.json
ADDED
The diff for this file is too large to render.
train_raw.json
ADDED
The diff for this file is too large to render.
val_alpaca.json
ADDED
The diff for this file is too large to render.
val_raw.json
ADDED
The diff for this file is too large to render.