---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: title
    dtype: string
  - name: label
    dtype: string
  - name: url
    dtype: string
  splits:
  - name: train
    num_bytes: 124825
    num_examples: 752
  download_size: 63605
  dataset_size: 124825
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- zu
size_categories:
- n<1K
---
The "all-categories" version of https://huggingface.co/datasets/dsfsi/za-isizulu-siswati-news for isizulu news.
## Purpose
The paper uses 5-fold cross-validation to train models, so all of the data here is placed in a single train split.
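Since everything is in one train split, the 5 folds have to be recreated downstream. Below is a minimal sketch of one way to do this with scikit-learn's `KFold`; the repository id passed to `load_dataset` is a placeholder for this dataset's actual path, and the shuffle seed is an arbitrary choice rather than a setting taken from the paper.

```python
from datasets import load_dataset
from sklearn.model_selection import KFold

# Placeholder repository id; replace with this dataset's actual path.
dataset = load_dataset("your-namespace/za-isizulu-news-all-categories", split="train")

# Recreate 5 folds from the single train split (seed is an arbitrary choice).
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(dataset)):
    train_fold = dataset.select(train_idx)
    test_fold = dataset.select(test_idx)
    print(f"fold {fold}: {len(train_fold)} train / {len(test_fold)} test examples")
```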
The original dataset could not be loaded at the time this version was created; the following call failed:
```python
from datasets import load_dataset

dataset = load_dataset("dsfsi/za-isizulu-siswati-news")
```