Update README.md

README.md CHANGED
@@ -14,7 +14,7 @@ dataset_info:
     num_bytes: 7584422.595703874
     num_examples: 10088
     download_size: 9002595
-  dataset_size: 15680087
+  dataset_size: 15680087
 - config_name: intents
   features:
   - name: id
@@ -46,4 +46,68 @@ configs:
   data_files:
   - split: intents
     path: intents/intents-*
+task_categories:
+- text-classification
+language:
+- en
 ---
+
+# reuters
+
+This is a text classification dataset intended for machine learning research and experimentation.
+
+It was obtained by reformatting another publicly available dataset to be compatible with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html).
+
+## Usage
+
+It is intended to be used with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):
+
+```python
+from autointent import Dataset
+
+reuters = Dataset.from_datasets("AutoIntent/reuters")
+```
+
+## Source
+
+This dataset is taken from `ucirvine/reuters21578` and formatted with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):
+
+```python
+from collections import defaultdict
+
+from datasets import load_dataset
+
+from autointent import Dataset
+
+# load the original data
+reuters = load_dataset("ucirvine/reuters21578", "ModHayes", trust_remote_code=True)
+
+# count topic occurrences to find low-resource classes
+counter = defaultdict(int)
+for batch in reuters["train"].iter(batch_size=16):
+    for labels in batch["topics"]:
+        for lab in labels:
+            counter[lab] += 1
+names_to_remove = [name for name, cnt in counter.items() if cnt < 10]
+
+# build a stable name -> id mapping over the remaining classes
+intent_names = sorted(set(name for intents in reuters["train"]["topics"] for name in intents))
+for n in names_to_remove:
+    intent_names.remove(n)
+name_to_id = {name: i for i, name in enumerate(intent_names)}
+
+# keep only texts and (filtered) labels
+def transform(example: dict):
+    return {
+        "utterance": example["text"],
+        "label": [name_to_id[name] for name in example["topics"] if name not in names_to_remove],
+    }
+
+multilabel_reuters = reuters["train"].map(transform, remove_columns=list(reuters["train"].features.keys()))
+
+# samples left without any label become out-of-scope samples
+res = multilabel_reuters.to_list()
+for sample in res:
+    if len(sample["label"]) == 0:
+        sample.pop("label")
+
+# format for AutoIntent
+intents = [{"id": i, "name": name} for i, name in enumerate(intent_names)]
+reuters_converted = Dataset.from_dict({"intents": intents, "train": res})
+```
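The low-resource filtering and out-of-scope handling in the conversion script can be sketched on toy data. The topic names and the threshold of 2 below are invented for illustration; the real script counts over the actual Reuters topics and uses a threshold of 10.

```python
from collections import defaultdict

# toy multilabel topics (invented names, not real Reuters classes)
toy_topics = [
    ["grain", "wheat"],
    ["grain"],
    ["grain"],
    ["rare"],  # appears only once, so it will be dropped
]

# count how often each topic occurs
counter = defaultdict(int)
for labels in toy_topics:
    for lab in labels:
        counter[lab] += 1

# drop classes below the (toy) threshold of 2 occurrences
names_to_remove = [name for name, cnt in counter.items() if cnt < 2]

# stable name -> id mapping over the surviving classes
intent_names = sorted(set(n for labels in toy_topics for n in labels))
for n in names_to_remove:
    intent_names.remove(n)
name_to_id = {name: i for i, name in enumerate(intent_names)}

# relabel; a sample whose every topic was removed ends up with an
# empty label list
filtered = [
    [name_to_id[n] for n in labels if n not in names_to_remove]
    for labels in toy_topics
]

print(name_to_id)  # {'grain': 0}
print(filtered)    # [[0], [0], [0], []]
```

The last toy sample keeps an empty label list, which the conversion script turns into an out-of-scope sample by popping the empty `label` key before building the AutoIntent `Dataset`.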