---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
splits:
- name: train
num_bytes: 127019
num_examples: 915
- name: validation
num_bytes: 121393
num_examples: 946
- name: test
num_bytes: 130972
num_examples: 952
download_size: 120493
dataset_size: 379384
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
language: br
task_categories:
- token-classification
---
Cleaned version of WikiAnn (Breton).
The original version contained leaks between splits as well as duplicated examples.
Starting from 1,000 examples per split, the new distribution is as follows:
```
DatasetDict({
    train: Dataset({
        features: ['tokens', 'ner_tags'],
        num_rows: 915
    })
    validation: Dataset({
        features: ['tokens', 'ner_tags'],
        num_rows: 946
    })
    test: Dataset({
        features: ['tokens', 'ner_tags'],
        num_rows: 952
    })
})
```
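For reference, here is a minimal sketch of this kind of cleaning, assuming the upstream data is the `wikiann` Breton config on the Hub. The exact script behind this card is not published, and which split gives up the leaked rows is a policy choice, so the sizes printed at the end depend on that choice.

```python
from datasets import load_dataset, Dataset, DatasetDict

# Assumption: the upstream dataset is the `wikiann` Breton config.
raw = load_dataset("wikiann", "br")

def key(example):
    # Identify a sentence by its token sequence.
    return tuple(example["tokens"])

def dedup(split):
    # Drop within-split duplicates, keeping the first occurrence and
    # only the two features exposed by this card.
    seen, rows = set(), []
    for ex in split:
        k = key(ex)
        if k not in seen:
            seen.add(k)
            rows.append({"tokens": ex["tokens"], "ner_tags": ex["ner_tags"]})
    return rows

train = dedup(raw["train"])
validation = dedup(raw["validation"])
test = dedup(raw["test"])

# Remove cross-split leaks: here validation/test lose the shared rows,
# but the opposite policy (pruning train) would be equally valid.
train_keys = {key(ex) for ex in train}
validation = [ex for ex in validation if key(ex) not in train_keys]
val_keys = {key(ex) for ex in validation}
test = [ex for ex in test
        if key(ex) not in train_keys and key(ex) not in val_keys]

clean = DatasetDict({
    "train": Dataset.from_list(train),
    "validation": Dataset.from_list(validation),
    "test": Dataset.from_list(test),
})
print(clean)  # split sizes depend on the leak-removal policy chosen
```

Once pushed to the Hub, the cleaned dataset can be reloaded with `load_dataset` in the usual way.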