# wikineural-multilingual-ner
This model is a fine-tuned version of [Babelscape/wikineural-multilingual-ner](https://huggingface.co/Babelscape/wikineural-multilingual-ner) on the wnut_17 dataset.
## Model description
The purpose of this model is to practice the Hugging Face token-classification tutorial, specifically the custom training loop section: https://huggingface.co/learn/nlp-course/chapter7/2?fw=pt#a-custom-training-loop. It fine-tunes [Babelscape/wikineural-multilingual-ner](https://huggingface.co/Babelscape/wikineural-multilingual-ner) on the [wnut_17](https://huggingface.co/datasets/wnut_17) dataset.
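As a quick usage illustration, a model like this can be queried through the token-classification pipeline. This is a minimal sketch: it uses the base Babelscape checkpoint as a stand-in, since the Hub id of this fine-tuned checkpoint is not stated in the card; substitute the fine-tuned weights to query this model instead.

```python
# Minimal sketch: run NER through the token-classification pipeline.
# "Babelscape/wikineural-multilingual-ner" is the base checkpoint; swap in
# the fine-tuned checkpoint's path or Hub id to use this model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Babelscape/wikineural-multilingual-ner",
    aggregation_strategy="simple",  # merge sub-token predictions into entity spans
)
print(ner("My name is Wolfgang and I live in Berlin."))
```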
## Intended uses & limitations
More information needed
## Training and evaluation data
The model was trained and evaluated on the wnut_17 dataset.
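For reference, the dataset's splits and label set can be inspected directly. A minimal sketch using the datasets library:

```python
# Minimal sketch: inspect the wnut_17 dataset used for training/evaluation.
from datasets import load_dataset

ds = load_dataset("wnut_17")
print(ds)  # train/validation/test splits with id, tokens, ner_tags columns

# wnut_17 uses BIO tags over six entity types:
# corporation, creative-work, group, location, person, product
label_names = ds["train"].features["ner_tags"].feature.names
print(label_names)
```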
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
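A minimal sketch of a custom training loop matching the hyperparameters above, in the spirit of the tutorial's custom-loop section. The label-alignment helper is a simplified assumption (it labels only the first sub-token of each word; the tutorial uses a fuller B-/I- conversion), and the checkpoint and dataset ids are those given in the model description.

```python
# Sketch: custom fine-tuning loop for token classification on wnut_17,
# using the hyperparameters listed in this card.
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, get_scheduler)

torch.manual_seed(42)  # seed: 42

raw = load_dataset("wnut_17")
label_names = raw["train"].features["ner_tags"].feature.names

checkpoint = "Babelscape/wikineural-multilingual-ner"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(
    checkpoint,
    num_labels=len(label_names),
    ignore_mismatched_sizes=True,  # wnut_17's label set differs from the base model's
)

def tokenize_and_align(batch):
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, labels in enumerate(batch["ner_tags"]):
        # Simplified alignment: label the first sub-token of each word,
        # mask the rest with -100 so the loss ignores them.
        aligned, prev = [], None
        for wid in tokenized.word_ids(batch_index=i):
            aligned.append(-100 if wid is None or wid == prev else labels[wid])
            prev = wid
        all_labels.append(aligned)
    tokenized["labels"] = all_labels
    return tokenized

tokenized = raw.map(tokenize_and_align, batched=True,
                    remove_columns=raw["train"].column_names)
collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
train_loader = DataLoader(tokenized["train"], shuffle=True,
                          batch_size=8, collate_fn=collator)  # train_batch_size: 8

# optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08; learning_rate: 2e-05
optimizer = AdamW(model.parameters(), lr=2e-5, betas=(0.9, 0.999), eps=1e-8)

num_epochs = 3  # num_epochs: 3
num_training_steps = num_epochs * len(train_loader)
lr_scheduler = get_scheduler("linear", optimizer=optimizer,  # lr_scheduler_type: linear
                             num_warmup_steps=0, num_training_steps=num_training_steps)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.train()
for epoch in range(num_epochs):
    for batch in train_loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```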
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2