---
dataset_info:
- config_name: default
  features:
  - name: utterance
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 763742
    num_examples: 13084
  download_size: 366002
  dataset_size: 763742
- config_name: intents
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: tags
    sequence: 'null'
  - name: regexp_full_match
    sequence: 'null'
  - name: regexp_partial_match
    sequence: 'null'
  - name: description
    dtype: 'null'
  splits:
  - name: intents
    num_bytes: 260
    num_examples: 7
  download_size: 3112
  dataset_size: 260
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: intents
  data_files:
  - split: intents
    path: intents/intents-*
task_categories:
- text-classification
language:
- en
---
# snips
This is a text classification dataset intended for machine learning research and experimentation.
It was obtained by reformatting another publicly available dataset to be compatible with our AutoIntent Library.
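
For reference, the two configs described in the metadata above can also be loaded with the plain `datasets` library. A minimal sketch (the printed values are placeholders):

```python
from datasets import load_dataset

# default config: labeled utterances for training
train = load_dataset("AutoIntent/snips", split="train")
print(train.features)  # {"utterance": string, "label": int64}

# "intents" config: the 7 intent definitions
intents = load_dataset("AutoIntent/snips", "intents", split="intents")
print(intents[0])  # e.g. {"id": 0, "name": "...", ...}
```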
## Usage
It is intended to be used with our AutoIntent Library:

```python
from autointent import Dataset

snips = Dataset.from_datasets("AutoIntent/snips")
```
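
Assuming the loaded `Dataset` behaves like a mapping from split names to Hugging Face datasets (as its construction in the Source section below suggests), individual records can then be inspected directly:

```python
# assumption: dict-style access to splits on AutoIntent's Dataset
print(snips["train"][0])  # {"utterance": "...", "label": ...}
```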
## Source
This dataset is taken from [benayas/snips](https://huggingface.co/datasets/benayas/snips)
and formatted with our AutoIntent Library:
```python
# define util
from datasets import load_dataset

from autointent import Dataset


def convert_snips(snips_train):
    # enumerate intent classes in a stable (sorted) order
    intent_names = sorted(snips_train.unique("category"))
    name_to_id = dict(zip(intent_names, range(len(intent_names)), strict=False))
    n_classes = len(intent_names)

    # one bucket of utterance records per intent id
    classwise_utterance_records = [[] for _ in range(n_classes)]
    intents = [
        {
            "id": i,
            "name": name,
        }
        for i, name in enumerate(intent_names)
    ]

    # assign each utterance its integer label and group by intent
    for batch in snips_train.iter(batch_size=16, drop_last_batch=False):
        for txt, name in zip(batch["text"], batch["category"], strict=False):
            intent_id = name_to_id[name]
            target_list = classwise_utterance_records[intent_id]
            target_list.append({"utterance": txt, "label": intent_id})

    # flatten the buckets into a single list ordered by intent id
    utterances = [rec for lst in classwise_utterance_records for rec in lst]
    return Dataset.from_dict({"intents": intents, "train": utterances})


# load and format
snips = load_dataset("benayas/snips")
snips_converted = convert_snips(snips["train"])
```
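
As a quick sanity check (hypothetical helper code, not part of the original conversion script), the class distribution of the source split can be inspected with the plain `datasets` objects before conversion:

```python
from collections import Counter

# count utterances per intent in the source split
label_counts = Counter(snips["train"]["category"])
print(len(label_counts))         # 7 intent classes
print(sorted(label_counts)[:3])  # first few intent names alphabetically
```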