---
dataset_info:
  - config_name: default
    features:
      - name: utterance
        dtype: string
      - name: label
        sequence: int64
    splits:
      - name: oos
        num_bytes: 7584422.595703874
        num_examples: 10088
      - name: train
        num_bytes: 26416704
        num_examples: 20856
    download_size: 18117453
    dataset_size: 34001126.59570387
  - config_name: intents
    features:
      - name: id
        dtype: int64
      - name: name
        dtype: string
      - name: tags
        sequence: 'null'
      - name: regexp_full_match
        sequence: 'null'
      - name: regexp_partial_match
        sequence: 'null'
      - name: description
        dtype: 'null'
    splits:
      - name: intents
        num_bytes: 1924
        num_examples: 65
    download_size: 3851
    dataset_size: 1924
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: oos
        path: data/oos-*
  - config_name: intents
    data_files:
      - split: intents
        path: intents/intents-*
task_categories:
  - text-classification
language:
  - en
---

# reuters

This is a multi-label text classification dataset intended for machine learning research and experimentation.

It was obtained by reformatting another publicly available dataset to make it compatible with our AutoIntent Library.
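
Each record pairs a raw news text (`utterance`) with a list of integer topic ids (`label`), so a single utterance can carry several labels. The snippet below is purely illustrative: the text and ids are invented and only show the record layout implied by the features declared above.

```python
# Hypothetical record for illustration; real texts and label ids differ.
sample = {
    "utterance": "OPEC members agree to trim crude output next quarter",
    "label": [3, 17],  # ids map to topic names via the "intents" config
}
```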

## Usage

It is intended to be used with our AutoIntent Library:

```python
from autointent import Dataset

reuters = Dataset.from_datasets("AutoIntent/reuters")
```
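
For a quick look at the underlying files outside of AutoIntent, the same repository can also be read with the plain Hugging Face `datasets` library, using the configs declared in the metadata above:

```python
from datasets import load_dataset

utterances = load_dataset("AutoIntent/reuters")           # "train" and "oos" splits
intents = load_dataset("AutoIntent/reuters", "intents")   # intent id -> name mapping
print(utterances["train"][0])
```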

## Source

This dataset is taken from [`ucirvine/reuters21578`](https://huggingface.co/datasets/ucirvine/reuters21578) and formatted with our AutoIntent Library:

```python
from collections import defaultdict
from datasets import load_dataset
from autointent import Dataset

# load original data
reuters = load_dataset("ucirvine/reuters21578", "ModHayes", trust_remote_code=True)

# count topic occurrences and drop low-resource classes (fewer than 10 samples)
counter = defaultdict(int)
for batch in reuters["train"].iter(batch_size=16):
    for labels in batch["topics"]:
        for lab in labels:
            counter[lab] += 1
names_to_remove = [name for name, cnt in counter.items() if cnt < 10]

intent_names = sorted(set(name for intents in reuters["train"]["topics"] for name in intents))
for n in names_to_remove:
    intent_names.remove(n)
name_to_id = {name: i for i, name in enumerate(intent_names)}

# extract only texts and labels
def transform(example: dict):
    return {
        "utterance": example["text"],
        "label": [name_to_id[intent_name] for intent_name in example["topics"] if intent_name not in names_to_remove],
    }
multilabel_reuters = reuters["train"].map(transform, remove_columns=list(reuters["train"].features.keys()))

# mark out-of-scope samples: drop the empty label field so they end up in the "oos" split
res = multilabel_reuters.to_list()
for sample in res:
    if len(sample["label"]) == 0:
        sample.pop("label")

# assemble the AutoIntent-formatted dataset
intents = [{"id": i, "name": name} for i, name in enumerate(intent_names)]
reuters_converted = Dataset.from_dict({"intents": intents, "train": res})
```
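
As a quick sanity check (not part of the original conversion script), the variables defined above can be used to verify how many samples ended up out-of-scope and that every remaining label id indexes into `intent_names`:

```python
# Count samples that lost all their topics and therefore have no "label" key.
n_oos = sum(1 for sample in res if "label" not in sample)
print(f"{len(res) - n_oos} labelled samples, {n_oos} out-of-scope samples")

# Every remaining label id should be a valid index into intent_names.
assert all(
    0 <= lab < len(intent_names)
    for sample in res
    if "label" in sample
    for lab in sample["label"]
)
```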