| modelId | tags | pipeline_tag | config | downloads | first_commit | card |
|---|---|---|---|---|---|---|
Cryptikdw/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
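The list above maps onto the `transformers` Trainer API; a minimal sketch of the equivalent `TrainingArguments`, assuming the standard Trainer setup (the `output_dir` value is a placeholder, not from the card):
```
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-finetuned-squad1",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed-precision training
)
```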
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Cthyllax/DialoGPT-medium-PaladinDanse | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- charly/autotrain-data-sentiment-4
co2_eq_emissions: 0.007597570744740809
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 812425472
- CO2 Emissions (in grams): 0.007597570744740809
## Validation Metrics
- Loss: 0.5105093121528625
- Accuracy: 0.8268156424581006
- Macro F1: 0.6020923520923521
- Micro F1: 0.8268156424581006
- Weighted F1: 0.8021395116367184
- Macro Precision: 0.5907986111111111
- Micro Precision: 0.8268156424581006
- Weighted Precision: 0.7792248603351954
- Macro Recall: 0.6141625496464206
- Micro Recall: 0.8268156424581006
- Weighted Recall: 0.8268156424581006
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/charly/autotrain-sentiment-4-812425472
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("charly/autotrain-sentiment-4-812425472", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("charly/autotrain-sentiment-4-812425472", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
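
# Hedged post-processing (not part of the original card): the raw outputs
# are logits, so convert them to probabilities and read off the top label.
import torch
probs = torch.softmax(outputs.logits, dim=-1)
pred = probs.argmax(dim=-1).item()
print(model.config.id2label[pred], probs[0, pred].item())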
``` |
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8967889908256881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5963
- Accuracy: 0.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
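In the absence of documented usage, a minimal inference sketch using the standard `pipeline` API (the model path below is a placeholder; substitute the actual repo id of this checkpoint):
```
from transformers import pipeline

# Placeholder repo id -- replace with the published path of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="path/to/distilbert-base-uncased-finetuned-sst2",
)
print(classifier("A touching and beautifully acted film."))
```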
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.247 | 1.0 | 1404 | 0.3629 | 0.8865 |
| 0.1532 | 2.0 | 2808 | 0.3945 | 0.8979 |
| 0.0981 | 3.0 | 4212 | 0.4206 | 0.9025 |
| 0.0468 | 4.0 | 5616 | 0.5358 | 0.9014 |
| 0.0313 | 5.0 | 7020 | 0.5963 | 0.8968 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_ancc | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8344
- Wer: 0.6055
## Model description
More information needed
## Intended uses & limitations
More information needed
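In the absence of documented usage, a minimal transcription sketch with the standard CTC API (the repo id and audio file below are placeholders, not from the card):
```
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder repo id -- replace with the published path of this checkpoint.
processor = Wav2Vec2Processor.from_pretrained("path/to/wav2vec2-base-timit-demo-colab3")
model = Wav2Vec2ForCTC.from_pretrained("path/to/wav2vec2-base-timit-demo-colab3")

speech, rate = sf.read("sample.wav")  # placeholder 16 kHz mono recording
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```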
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0927 | 13.89 | 500 | 2.7346 | 1.0 |
| 0.9983 | 27.78 | 1000 | 0.8344 | 0.6055 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_ekkicc | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This is the CaiT model from [1]. It was first implemented in TensorFlow, and the original parameters from [2] were then ported into that implementation. Refer to [3] for more details.
## References
[1] Going deeper with Image Transformers: https://arxiv.org/abs/2103.17239
[2] CaiT GitHub: https://github.com/facebookresearch/deit
[3] CaiT-TF GitHub: https://github.com/sayakpaul/cait-tf |
Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2-finetuned-de-to-is_nr2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-05-02T03:40:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned-ko-to-en5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en5
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1434
- Bleu: 52.6052
- Gen Len: 8.1982
## Model description
More information needed
## Intended uses & limitations
More information needed
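In the absence of documented usage, a minimal translation sketch with the standard seq2seq API (the repo id is a placeholder; substitute the actual path of this checkpoint):
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder repo id -- replace with the published path of this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("path/to/opus-mt-ko-en-finetuned-ko-to-en5")
model = AutoModelForSeq2SeqLM.from_pretrained("path/to/opus-mt-ko-en-finetuned-ko-to-en5")

inputs = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")  # "Hello, nice to meet you."
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```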
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 105 | 1.8436 | 35.225 | 8.1735 |
| No log | 2.0 | 210 | 1.4106 | 44.7159 | 8.1923 |
| No log | 3.0 | 315 | 1.2410 | 49.5117 | 8.2165 |
| No log | 4.0 | 420 | 1.1661 | 51.8883 | 8.201 |
| 1.8123 | 5.0 | 525 | 1.1434 | 52.6052 | 8.1982 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2 | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-05-02T03:48:03Z | To test it yourself, the easiest way is to use the Colab link below.
GitHub repo: https://github.com/mephisto121/Chemical_explosion_classification
[](https://colab.research.google.com/drive/1GQmh1g2bRdqgQCnM6b_iY-eAQCRfhMJP?usp=sharing) |
CuongLD/wav2vec2-large-xlsr-vietnamese | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"vi",
"dataset:common_voice, infore_25h",
"arxiv:2006.11477",
"arxiv:2006.13979",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- crcb/autotrain-data-go_emo_new
co2_eq_emissions: 20.58663910106142
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 813325491
- CO2 Emissions (in grams): 20.58663910106142
## Validation Metrics
- Loss: 1.3628994226455688
- Accuracy: 0.5920355494787216
- Macro F1: 0.4844439507523978
- Micro F1: 0.5920355494787216
- Weighted F1: 0.5873137663478112
- Macro Precision: 0.5458988948121151
- Micro Precision: 0.5920355494787216
- Weighted Precision: 0.591386299522425
- Macro Recall: 0.4753100798358001
- Micro Recall: 0.5920355494787216
- Weighted Recall: 0.5920355494787216
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-go_emo_new-813325491
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-go_emo_new-813325491", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-go_emo_new-813325491", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
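
# Hedged post-processing (not part of the original card): map the argmax
# logit to an emotion label via the model config.
pred_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])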
``` |
CurtisBowser/DialoGPT-medium-sora-three | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: pl
license: cc-by-sa-4.0
datasets:
- 19th and 20th century articles mentioning Japan
---
# Model for detection of Orientalization of Japan in newspaper articles
This model is based on the original [HerBERT](https://huggingface.co/allegro/herbert-base-cased) Base.
The model was fine-tuned on a set of Polish press articles mentioning Japan from the years 1818-1939 to recognize whether an article (or any input text) presents a genuine description of Japan or is an example of the orientalization of Japan. By orientalization we mean a text that represents Japan through the lens of [Orientalism](https://en.wikipedia.org/wiki/Orientalism), i.e. a viewpoint that presents a piece of Eastern culture, in this case Japan, in a deformed, distorted, and idealized form. In defining Orientalism we follow the work of [Edward Said](https://en.wikipedia.org/wiki/Edward_Said), especially his book "[Orientalism](https://en.wikipedia.org/wiki/Orientalism_(book))".
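A minimal inference sketch, assuming the repo id given in the citation below and the standard sequence-classification API:
```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repo id taken from the citation below.
tokenizer = AutoTokenizer.from_pretrained("ptaszynski/japan-topic-detection")
model = AutoModelForSequenceClassification.from_pretrained("ptaszynski/japan-topic-detection")

text = "Przykładowy artykuł o Japonii."  # "An example article about Japan."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```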
## Licenses
The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License.
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>
## Citations
Please cite this model using the following BibTeX entry.
```
@inproceedings{ptaszynski2022herbert-japan,
title={Finetuned HerBERT model for detecting orientalization of Japan in newspaper articles},
author={Michal Ptaszynski and Pawel Dybala and Zuzanna Barczyk},
booktitle={HuggingFace},
url={https://huggingface.co/ptaszynski/japan-topic-detection},
year={2022}
}
``` |
CyberMuffin/DialoGPT-small-ChandlerBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- spacy
- token-classification
language:
- sv
license: cc-by-sa-4.0
model-index:
- name: sv_core_news_sm
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.798119469
- name: NER Recall
type: recall
value: 0.702189781
- name: NER F Score
type: f_score
value: 0.7470877556
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9309992855
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9474328876
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9386546902
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9479432479
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8139719203
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.759057971
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9218900675
---
### Details: https://spacy.io/models/sv#sv_core_news_sm
Swedish pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
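A minimal usage sketch, assuming the package has been installed (e.g. via `python -m spacy download sv_core_news_sm`):
```
import spacy

nlp = spacy.load("sv_core_news_sm")
doc = nlp("Stockholm är Sveriges huvudstad.")  # "Stockholm is Sweden's capital."

# Named entities, then per-token POS tags and dependency labels.
for ent in doc.ents:
    print(ent.text, ent.label_)
for token in doc:
    print(token.text, token.pos_, token.dep_)
```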
| Feature | Description |
| --- | --- |
| **Name** | `sv_core_news_sm` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Swedish Talbanken v2.8](https://github.com/UniversalDependencies/UD_Swedish-Talbanken) (Nivre, Joakim; Smith, Aaron)<br />[Stockholm-Umeå Corpus (SUC) v3.0](https://huggingface.co/datasets/KBLab/sucx3_ner) (Språkbanken) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (381 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `AB`, `AB\|AN`, `AB\|KOM`, `AB\|POS`, `AB\|SMS`, `AB\|SUV`, `DT\|NEU\|SIN\|DEF`, `DT\|NEU\|SIN\|IND`, `DT\|NEU\|SIN\|IND/DEF`, `DT\|UTR/NEU\|PLU\|DEF`, `DT\|UTR/NEU\|PLU\|IND`, `DT\|UTR/NEU\|PLU\|IND/DEF`, `DT\|UTR/NEU\|SIN/PLU\|IND`, `DT\|UTR/NEU\|SIN\|DEF`, `DT\|UTR/NEU\|SIN\|IND`, `DT\|UTR\|SIN\|DEF`, `DT\|UTR\|SIN\|IND`, `DT\|UTR\|SIN\|IND/DEF`, `HA`, `HD\|NEU\|SIN\|IND`, `HD\|UTR/NEU\|PLU\|IND`, `HD\|UTR\|SIN\|IND`, `HP\|-\|-\|-`, `HP\|NEU\|SIN\|IND`, `HP\|UTR/NEU\|PLU\|IND`, `HP\|UTR\|SIN\|IND`, `HS\|DEF`, `IE`, `IN`, `JJ`, `JJ\|AN`, `JJ\|KOM\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `JJ\|POS\|MAS\|SIN\|DEF\|GEN`, `JJ\|POS\|MAS\|SIN\|DEF\|NOM`, `JJ\|POS\|NEU\|SIN\|IND/DEF\|NOM`, `JJ\|POS\|NEU\|SIN\|IND\|NOM`, `JJ\|POS\|UTR/NEU\|PLU\|IND/DEF\|GEN`, `JJ\|POS\|UTR/NEU\|PLU\|IND/DEF\|NOM`, `JJ\|POS\|UTR/NEU\|PLU\|IND\|NOM`, `JJ\|POS\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `JJ\|POS\|UTR/NEU\|SIN\|DEF\|NOM`, `JJ\|POS\|UTR\|-\|-\|SMS`, `JJ\|POS\|UTR\|SIN\|IND/DEF\|NOM`, `JJ\|POS\|UTR\|SIN\|IND\|GEN`, `JJ\|POS\|UTR\|SIN\|IND\|NOM`, `JJ\|SUV\|MAS\|SIN\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|PLU\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|SIN/PLU\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|SIN/PLU\|IND\|NOM`, `KN`, `MAD`, `MID`, `NN`, `NN\|-\|-\|-\|-`, `NN\|AN`, `NN\|NEU\|-\|-\|SMS`, `NN\|NEU\|PLU\|DEF\|GEN`, `NN\|NEU\|PLU\|DEF\|NOM`, `NN\|NEU\|PLU\|IND\|GEN`, `NN\|NEU\|PLU\|IND\|NOM`, `NN\|NEU\|SIN\|DEF\|GEN`, `NN\|NEU\|SIN\|DEF\|NOM`, `NN\|NEU\|SIN\|IND`, `NN\|NEU\|SIN\|IND\|GEN`, `NN\|NEU\|SIN\|IND\|NOM`, `NN\|SMS`, `NN\|UTR\|-\|-\|-`, `NN\|UTR\|-\|-\|SMS`, `NN\|UTR\|PLU\|DEF\|GEN`, `NN\|UTR\|PLU\|DEF\|NOM`, `NN\|UTR\|PLU\|IND\|GEN`, `NN\|UTR\|PLU\|IND\|NOM`, `NN\|UTR\|SIN\|DEF\|GEN`, `NN\|UTR\|SIN\|DEF\|NOM`, `NN\|UTR\|SIN\|IND\|GEN`, `NN\|UTR\|SIN\|IND\|NOM`, `PAD`, `PC\|PRF\|NEU\|SIN\|IND\|NOM`, `PC\|PRF\|UTR/NEU\|PLU\|IND/DEF\|GEN`, `PC\|PRF\|UTR/NEU\|PLU\|IND/DEF\|NOM`, `PC\|PRF\|UTR/NEU\|SIN\|DEF\|NOM`, `PC\|PRF\|UTR\|SIN\|IND\|NOM`, `PC\|PRS\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `PL`, `PM`, `PM\|GEN`, `PM\|NOM`, `PM\|SMS`, `PN\|MAS\|SIN\|DEF\|SUB/OBJ`, `PN\|NEU\|SIN\|DEF`, `PN\|NEU\|SIN\|DEF\|SUB/OBJ`, `PN\|NEU\|SIN\|IND\|SUB/OBJ`, `PN\|UTR/NEU\|PLU\|DEF\|OBJ`, `PN\|UTR/NEU\|PLU\|DEF\|SUB`, `PN\|UTR/NEU\|PLU\|DEF\|SUB/OBJ`, `PN\|UTR/NEU\|PLU\|IND\|SUB/OBJ`, `PN\|UTR/NEU\|SIN/PLU\|DEF\|OBJ`, `PN\|UTR\|PLU\|DEF\|OBJ`, `PN\|UTR\|PLU\|DEF\|SUB`, `PN\|UTR\|SIN\|DEF\|NOM`, `PN\|UTR\|SIN\|DEF\|OBJ`, `PN\|UTR\|SIN\|DEF\|SUB`, `PN\|UTR\|SIN\|DEF\|SUB/OBJ`, `PN\|UTR\|SIN\|IND\|NOM`, `PN\|UTR\|SIN\|IND\|SUB`, `PN\|UTR\|SIN\|IND\|SUB/OBJ`, `PP`, `PS\|NEU\|SIN\|DEF`, `PS\|UTR/NEU\|PLU\|DEF`, `PS\|UTR/NEU\|SIN/PLU\|DEF`, `PS\|UTR\|SIN\|DEF`, `RG\|NEU\|SIN\|IND\|NOM`, `RG\|NOM`, `RG\|SMS`, `RG\|UTR\|SIN\|IND\|NOM`, `RO\|MAS\|SIN\|IND/DEF\|NOM`, `RO\|NOM`, `SN`, `UO`, `VB\|AN`, `VB\|IMP\|AKT`, `VB\|IMP\|SFO`, `VB\|INF\|AKT`, `VB\|INF\|SFO`, `VB\|KON\|PRS\|AKT`, `VB\|KON\|PRT\|AKT`, `VB\|PRS\|AKT`, `VB\|PRS\|SFO`, `VB\|PRT\|AKT`, `VB\|PRT\|SFO`, `VB\|SUP\|AKT`, `VB\|SUP\|SFO`, `_SP` |
| **`morphologizer`** | `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `POS=PUNCT`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|POS=ADV`, `POS=SCONJ`, `POS=ADV`, `Case=Nom\|Definite=Ind\|Gender=Com\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PART`, `POS=VERB\|VerbForm=Inf`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=CCONJ`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Pos\|POS=ADV`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=VERB\|VerbForm=Sup\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PART\|Polarity=Neg`, `Case=Nom\|Degree=Pos\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Sup\|POS=ADV`, `Case=Nom\|NumType=Card\|POS=NUM`, `Abbr=Yes\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Sup\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `POS=AUX\|VerbForm=Sup\|Voice=Act`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Rcp`, `POS=SPACE`, `POS=VERB\|VerbForm=Sup\|Voice=Pass`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|POS=ADJ`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|POS=DET\|PronType=Ind`, `Case=Nom\|Definite=Def\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Nom\|POS=ADJ\|Tense=Pres\|VerbForm=Part`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Acc\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Dem`, `Definite=Def\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=NOUN`, `Case=Nom\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Ind`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Com\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Def\|POS=PRON\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Nom\|POS=PROPN`, `Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Plur\|POS=PRON\|PronType=Prs`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Definite=Def\|Gender=Com\|Number=Plur\|POS=PRON\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Int`, `Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|POS=PROPN`, `POS=PROPN`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Int`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Neut\|POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Int`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Neg`, `POS=VERB\|VerbForm=Sup`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|POS=ADJ`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=SYM`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Definite=Ind\|Degree=Sup\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Neg`, `Mood=Sub\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|Gender=Com\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, 
`Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Ind\|POS=DET\|PronType=Prs`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Rel`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Def\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Ind`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Abbr=Yes\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Rel`, `NumType=Card\|POS=NUM`, `POS=INTJ`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int`, `Degree=Sup\|POS=ADV\|Polarity=Neg`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Int`, `POS=ADV\|Polarity=Neg`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Ind`, `POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Def\|POS=PRON\|PronType=Ind`, `Foreign=Yes\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Dem`, `Abbr=Yes\|Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Rel`, `Foreign=Yes\|POS=CCONJ`, `POS=DET\|PronType=Art`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Degree=Pos\|POS=ADV\|Polarity=Neg`, `Mood=Sub\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=PRON\|PronType=Ind`, `Definite=Ind\|POS=DET\|PronType=Neg`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Neg`, `POS=CCONJ\|Polarity=Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Imp\|POS=AUX\|VerbForm=Fin\|Voice=Act`, `Foreign=Yes\|POS=ADV`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Rcp`, 
`Case=Acc\|Definite=Def\|POS=PRON\|Polarity=Neg\|PronType=Ind` |
| **`parser`** | `ROOT`, `acl`, `acl:cleft`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `expl`, `fixed`, `flat:name`, `iobj`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `EVN`, `LOC`, `MSR`, `OBJ`, `ORG`, `PRS`, `TME`, `WRK` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.99 |
| `TOKEN_P` | 99.95 |
| `TOKEN_R` | 99.96 |
| `TOKEN_F` | 99.95 |
| `TAG_ACC` | 93.10 |
| `POS_ACC` | 94.74 |
| `MORPH_ACC` | 93.87 |
| `MORPH_MICRO_P` | 95.68 |
| `MORPH_MICRO_R` | 95.59 |
| `MORPH_MICRO_F` | 95.64 |
| `SENTS_P` | 89.68 |
| `SENTS_R` | 94.84 |
| `SENTS_F` | 92.19 |
| `DEP_UAS` | 81.40 |
| `DEP_LAS` | 75.91 |
| `LEMMA_ACC` | 94.79 |
| `ENTS_P` | 79.81 |
| `ENTS_R` | 70.22 |
| `ENTS_F` | 74.71 | |
Cyrell/Cyrell | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- spacy
- token-classification
language:
- sv
license: cc-by-sa-4.0
model-index:
- name: sv_core_news_md
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8516666667
- name: NER Recall
type: recall
value: 0.7459854015
- name: NER F Score
type: f_score
value: 0.7953307393
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9482494641
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9606001837
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9541696438
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9557007247
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8339750849
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.7849377123
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9126213592
---
### Details: https://spacy.io/models/sv#sv_core_news_md
Swedish pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
| Feature | Description |
| --- | --- |
| **Name** | `sv_core_news_md` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | floret (50000, 300) |
| **Sources** | [UD Swedish Talbanken v2.8](https://github.com/UniversalDependencies/UD_Swedish-Talbanken) (Nivre, Joakim; Smith, Aaron)<br />[Stockholm-Umeå Corpus (SUC) v3.0](https://huggingface.co/datasets/KBLab/sucx3_ner) (Språkbanken)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (381 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `AB`, `AB\|AN`, `AB\|KOM`, `AB\|POS`, `AB\|SMS`, `AB\|SUV`, `DT\|NEU\|SIN\|DEF`, `DT\|NEU\|SIN\|IND`, `DT\|NEU\|SIN\|IND/DEF`, `DT\|UTR/NEU\|PLU\|DEF`, `DT\|UTR/NEU\|PLU\|IND`, `DT\|UTR/NEU\|PLU\|IND/DEF`, `DT\|UTR/NEU\|SIN/PLU\|IND`, `DT\|UTR/NEU\|SIN\|DEF`, `DT\|UTR/NEU\|SIN\|IND`, `DT\|UTR\|SIN\|DEF`, `DT\|UTR\|SIN\|IND`, `DT\|UTR\|SIN\|IND/DEF`, `HA`, `HD\|NEU\|SIN\|IND`, `HD\|UTR/NEU\|PLU\|IND`, `HD\|UTR\|SIN\|IND`, `HP\|-\|-\|-`, `HP\|NEU\|SIN\|IND`, `HP\|UTR/NEU\|PLU\|IND`, `HP\|UTR\|SIN\|IND`, `HS\|DEF`, `IE`, `IN`, `JJ`, `JJ\|AN`, `JJ\|KOM\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `JJ\|POS\|MAS\|SIN\|DEF\|GEN`, `JJ\|POS\|MAS\|SIN\|DEF\|NOM`, `JJ\|POS\|NEU\|SIN\|IND/DEF\|NOM`, `JJ\|POS\|NEU\|SIN\|IND\|NOM`, `JJ\|POS\|UTR/NEU\|PLU\|IND/DEF\|GEN`, `JJ\|POS\|UTR/NEU\|PLU\|IND/DEF\|NOM`, `JJ\|POS\|UTR/NEU\|PLU\|IND\|NOM`, `JJ\|POS\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `JJ\|POS\|UTR/NEU\|SIN\|DEF\|NOM`, `JJ\|POS\|UTR\|-\|-\|SMS`, `JJ\|POS\|UTR\|SIN\|IND/DEF\|NOM`, `JJ\|POS\|UTR\|SIN\|IND\|GEN`, `JJ\|POS\|UTR\|SIN\|IND\|NOM`, `JJ\|SUV\|MAS\|SIN\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|PLU\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|SIN/PLU\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|SIN/PLU\|IND\|NOM`, `KN`, `MAD`, `MID`, `NN`, `NN\|-\|-\|-\|-`, `NN\|AN`, `NN\|NEU\|-\|-\|SMS`, `NN\|NEU\|PLU\|DEF\|GEN`, `NN\|NEU\|PLU\|DEF\|NOM`, `NN\|NEU\|PLU\|IND\|GEN`, `NN\|NEU\|PLU\|IND\|NOM`, `NN\|NEU\|SIN\|DEF\|GEN`, `NN\|NEU\|SIN\|DEF\|NOM`, `NN\|NEU\|SIN\|IND`, `NN\|NEU\|SIN\|IND\|GEN`, `NN\|NEU\|SIN\|IND\|NOM`, `NN\|SMS`, `NN\|UTR\|-\|-\|-`, `NN\|UTR\|-\|-\|SMS`, `NN\|UTR\|PLU\|DEF\|GEN`, `NN\|UTR\|PLU\|DEF\|NOM`, `NN\|UTR\|PLU\|IND\|GEN`, `NN\|UTR\|PLU\|IND\|NOM`, `NN\|UTR\|SIN\|DEF\|GEN`, `NN\|UTR\|SIN\|DEF\|NOM`, `NN\|UTR\|SIN\|IND\|GEN`, `NN\|UTR\|SIN\|IND\|NOM`, `PAD`, `PC\|PRF\|NEU\|SIN\|IND\|NOM`, `PC\|PRF\|UTR/NEU\|PLU\|IND/DEF\|GEN`, `PC\|PRF\|UTR/NEU\|PLU\|IND/DEF\|NOM`, `PC\|PRF\|UTR/NEU\|SIN\|DEF\|NOM`, `PC\|PRF\|UTR\|SIN\|IND\|NOM`, `PC\|PRS\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `PL`, `PM`, `PM\|GEN`, `PM\|NOM`, `PM\|SMS`, `PN\|MAS\|SIN\|DEF\|SUB/OBJ`, `PN\|NEU\|SIN\|DEF`, `PN\|NEU\|SIN\|DEF\|SUB/OBJ`, `PN\|NEU\|SIN\|IND\|SUB/OBJ`, `PN\|UTR/NEU\|PLU\|DEF\|OBJ`, `PN\|UTR/NEU\|PLU\|DEF\|SUB`, `PN\|UTR/NEU\|PLU\|DEF\|SUB/OBJ`, `PN\|UTR/NEU\|PLU\|IND\|SUB/OBJ`, `PN\|UTR/NEU\|SIN/PLU\|DEF\|OBJ`, `PN\|UTR\|PLU\|DEF\|OBJ`, `PN\|UTR\|PLU\|DEF\|SUB`, `PN\|UTR\|SIN\|DEF\|NOM`, `PN\|UTR\|SIN\|DEF\|OBJ`, `PN\|UTR\|SIN\|DEF\|SUB`, `PN\|UTR\|SIN\|DEF\|SUB/OBJ`, `PN\|UTR\|SIN\|IND\|NOM`, `PN\|UTR\|SIN\|IND\|SUB`, `PN\|UTR\|SIN\|IND\|SUB/OBJ`, `PP`, `PS\|NEU\|SIN\|DEF`, `PS\|UTR/NEU\|PLU\|DEF`, `PS\|UTR/NEU\|SIN/PLU\|DEF`, `PS\|UTR\|SIN\|DEF`, `RG\|NEU\|SIN\|IND\|NOM`, `RG\|NOM`, `RG\|SMS`, `RG\|UTR\|SIN\|IND\|NOM`, `RO\|MAS\|SIN\|IND/DEF\|NOM`, `RO\|NOM`, `SN`, `UO`, `VB\|AN`, `VB\|IMP\|AKT`, `VB\|IMP\|SFO`, `VB\|INF\|AKT`, `VB\|INF\|SFO`, `VB\|KON\|PRS\|AKT`, `VB\|KON\|PRT\|AKT`, `VB\|PRS\|AKT`, `VB\|PRS\|SFO`, `VB\|PRT\|AKT`, `VB\|PRT\|SFO`, `VB\|SUP\|AKT`, `VB\|SUP\|SFO`, `_SP` |
| **`morphologizer`** | `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `POS=PUNCT`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|POS=ADV`, `POS=SCONJ`, `POS=ADV`, `Case=Nom\|Definite=Ind\|Gender=Com\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PART`, `POS=VERB\|VerbForm=Inf`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=CCONJ`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Pos\|POS=ADV`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=VERB\|VerbForm=Sup\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PART\|Polarity=Neg`, `Case=Nom\|Degree=Pos\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Sup\|POS=ADV`, `Case=Nom\|NumType=Card\|POS=NUM`, `Abbr=Yes\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Sup\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `POS=AUX\|VerbForm=Sup\|Voice=Act`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Rcp`, `POS=SPACE`, `POS=VERB\|VerbForm=Sup\|Voice=Pass`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|POS=ADJ`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|POS=DET\|PronType=Ind`, `Case=Nom\|Definite=Def\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Nom\|POS=ADJ\|Tense=Pres\|VerbForm=Part`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Acc\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Dem`, `Definite=Def\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=NOUN`, `Case=Nom\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Ind`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Com\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Def\|POS=PRON\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Nom\|POS=PROPN`, `Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Plur\|POS=PRON\|PronType=Prs`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Definite=Def\|Gender=Com\|Number=Plur\|POS=PRON\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Int`, `Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|POS=PROPN`, `POS=PROPN`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Int`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Neut\|POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Int`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Neg`, `POS=VERB\|VerbForm=Sup`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|POS=ADJ`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=SYM`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Definite=Ind\|Degree=Sup\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Neg`, `Mood=Sub\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|Gender=Com\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, 
`Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Ind\|POS=DET\|PronType=Prs`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Rel`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Def\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Ind`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Abbr=Yes\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Rel`, `NumType=Card\|POS=NUM`, `POS=INTJ`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int`, `Degree=Sup\|POS=ADV\|Polarity=Neg`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Int`, `POS=ADV\|Polarity=Neg`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Ind`, `POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Def\|POS=PRON\|PronType=Ind`, `Foreign=Yes\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Dem`, `Abbr=Yes\|Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Rel`, `Foreign=Yes\|POS=CCONJ`, `POS=DET\|PronType=Art`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Degree=Pos\|POS=ADV\|Polarity=Neg`, `Mood=Sub\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=PRON\|PronType=Ind`, `Definite=Ind\|POS=DET\|PronType=Neg`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Neg`, `POS=CCONJ\|Polarity=Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Imp\|POS=AUX\|VerbForm=Fin\|Voice=Act`, `Foreign=Yes\|POS=ADV`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Rcp`, 
`Case=Acc\|Definite=Def\|POS=PRON\|Polarity=Neg\|PronType=Ind` |
| **`parser`** | `ROOT`, `acl`, `acl:cleft`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `expl`, `fixed`, `flat:name`, `iobj`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `EVN`, `LOC`, `MSR`, `OBJ`, `ORG`, `PRS`, `TME`, `WRK` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.99 |
| `TOKEN_P` | 99.95 |
| `TOKEN_R` | 99.96 |
| `TOKEN_F` | 99.95 |
| `TAG_ACC` | 94.82 |
| `POS_ACC` | 96.06 |
| `MORPH_ACC` | 95.42 |
| `MORPH_MICRO_P` | 97.28 |
| `MORPH_MICRO_R` | 97.17 |
| `MORPH_MICRO_F` | 97.23 |
| `SENTS_P` | 89.35 |
| `SENTS_R` | 93.25 |
| `SENTS_F` | 91.26 |
| `DEP_UAS` | 83.40 |
| `DEP_LAS` | 78.49 |
| `LEMMA_ACC` | 95.57 |
| `ENTS_P` | 85.17 |
| `ENTS_R` | 74.60 |
| `ENTS_F` | 79.53 | |
Czapla/Rick | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- spacy
- token-classification
language:
- sv
license: cc-by-sa-4.0
model-index:
- name: sv_core_news_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8602032409
- name: NER Recall
type: recall
value: 0.7620437956
- name: NER F Score
type: f_score
value: 0.8081537866
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9509033378
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9637644177
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9584566704
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9555986526
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8362874455
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.7879268362
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9364613881
---
### Details: https://spacy.io/models/sv#sv_core_news_lg
Swedish pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
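Unlike the `sm` package, this pipeline ships floret vectors, so token and document vectors are populated; a minimal sketch assuming the package is installed (e.g. via `python -m spacy download sv_core_news_lg`):
```
import spacy

nlp = spacy.load("sv_core_news_lg")
doc1 = nlp("Jag gillar kaffe.")   # "I like coffee."
doc2 = nlp("Jag tycker om te.")   # "I like tea."

# The floret vectors back the similarity score and per-token vectors below.
print(doc1.similarity(doc2))
print(doc1[2].vector.shape)  # vector for "kaffe"
```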
| Feature | Description |
| --- | --- |
| **Name** | `sv_core_news_lg` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | floret (200000, 300) |
| **Sources** | [UD Swedish Talbanken v2.8](https://github.com/UniversalDependencies/UD_Swedish-Talbanken) (Nivre, Joakim; Smith, Aaron)<br />[Stockholm-Umeå Corpus (SUC) v3.0](https://huggingface.co/datasets/KBLab/sucx3_ner) (Språkbanken)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (381 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `AB`, `AB\|AN`, `AB\|KOM`, `AB\|POS`, `AB\|SMS`, `AB\|SUV`, `DT\|NEU\|SIN\|DEF`, `DT\|NEU\|SIN\|IND`, `DT\|NEU\|SIN\|IND/DEF`, `DT\|UTR/NEU\|PLU\|DEF`, `DT\|UTR/NEU\|PLU\|IND`, `DT\|UTR/NEU\|PLU\|IND/DEF`, `DT\|UTR/NEU\|SIN/PLU\|IND`, `DT\|UTR/NEU\|SIN\|DEF`, `DT\|UTR/NEU\|SIN\|IND`, `DT\|UTR\|SIN\|DEF`, `DT\|UTR\|SIN\|IND`, `DT\|UTR\|SIN\|IND/DEF`, `HA`, `HD\|NEU\|SIN\|IND`, `HD\|UTR/NEU\|PLU\|IND`, `HD\|UTR\|SIN\|IND`, `HP\|-\|-\|-`, `HP\|NEU\|SIN\|IND`, `HP\|UTR/NEU\|PLU\|IND`, `HP\|UTR\|SIN\|IND`, `HS\|DEF`, `IE`, `IN`, `JJ`, `JJ\|AN`, `JJ\|KOM\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `JJ\|POS\|MAS\|SIN\|DEF\|GEN`, `JJ\|POS\|MAS\|SIN\|DEF\|NOM`, `JJ\|POS\|NEU\|SIN\|IND/DEF\|NOM`, `JJ\|POS\|NEU\|SIN\|IND\|NOM`, `JJ\|POS\|UTR/NEU\|PLU\|IND/DEF\|GEN`, `JJ\|POS\|UTR/NEU\|PLU\|IND/DEF\|NOM`, `JJ\|POS\|UTR/NEU\|PLU\|IND\|NOM`, `JJ\|POS\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `JJ\|POS\|UTR/NEU\|SIN\|DEF\|NOM`, `JJ\|POS\|UTR\|-\|-\|SMS`, `JJ\|POS\|UTR\|SIN\|IND/DEF\|NOM`, `JJ\|POS\|UTR\|SIN\|IND\|GEN`, `JJ\|POS\|UTR\|SIN\|IND\|NOM`, `JJ\|SUV\|MAS\|SIN\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|PLU\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|SIN/PLU\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|SIN/PLU\|IND\|NOM`, `KN`, `MAD`, `MID`, `NN`, `NN\|-\|-\|-\|-`, `NN\|AN`, `NN\|NEU\|-\|-\|SMS`, `NN\|NEU\|PLU\|DEF\|GEN`, `NN\|NEU\|PLU\|DEF\|NOM`, `NN\|NEU\|PLU\|IND\|GEN`, `NN\|NEU\|PLU\|IND\|NOM`, `NN\|NEU\|SIN\|DEF\|GEN`, `NN\|NEU\|SIN\|DEF\|NOM`, `NN\|NEU\|SIN\|IND`, `NN\|NEU\|SIN\|IND\|GEN`, `NN\|NEU\|SIN\|IND\|NOM`, `NN\|SMS`, `NN\|UTR\|-\|-\|-`, `NN\|UTR\|-\|-\|SMS`, `NN\|UTR\|PLU\|DEF\|GEN`, `NN\|UTR\|PLU\|DEF\|NOM`, `NN\|UTR\|PLU\|IND\|GEN`, `NN\|UTR\|PLU\|IND\|NOM`, `NN\|UTR\|SIN\|DEF\|GEN`, `NN\|UTR\|SIN\|DEF\|NOM`, `NN\|UTR\|SIN\|IND\|GEN`, `NN\|UTR\|SIN\|IND\|NOM`, `PAD`, `PC\|PRF\|NEU\|SIN\|IND\|NOM`, `PC\|PRF\|UTR/NEU\|PLU\|IND/DEF\|GEN`, `PC\|PRF\|UTR/NEU\|PLU\|IND/DEF\|NOM`, `PC\|PRF\|UTR/NEU\|SIN\|DEF\|NOM`, `PC\|PRF\|UTR\|SIN\|IND\|NOM`, `PC\|PRS\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `PL`, `PM`, `PM\|GEN`, `PM\|NOM`, `PM\|SMS`, `PN\|MAS\|SIN\|DEF\|SUB/OBJ`, `PN\|NEU\|SIN\|DEF`, `PN\|NEU\|SIN\|DEF\|SUB/OBJ`, `PN\|NEU\|SIN\|IND\|SUB/OBJ`, `PN\|UTR/NEU\|PLU\|DEF\|OBJ`, `PN\|UTR/NEU\|PLU\|DEF\|SUB`, `PN\|UTR/NEU\|PLU\|DEF\|SUB/OBJ`, `PN\|UTR/NEU\|PLU\|IND\|SUB/OBJ`, `PN\|UTR/NEU\|SIN/PLU\|DEF\|OBJ`, `PN\|UTR\|PLU\|DEF\|OBJ`, `PN\|UTR\|PLU\|DEF\|SUB`, `PN\|UTR\|SIN\|DEF\|NOM`, `PN\|UTR\|SIN\|DEF\|OBJ`, `PN\|UTR\|SIN\|DEF\|SUB`, `PN\|UTR\|SIN\|DEF\|SUB/OBJ`, `PN\|UTR\|SIN\|IND\|NOM`, `PN\|UTR\|SIN\|IND\|SUB`, `PN\|UTR\|SIN\|IND\|SUB/OBJ`, `PP`, `PS\|NEU\|SIN\|DEF`, `PS\|UTR/NEU\|PLU\|DEF`, `PS\|UTR/NEU\|SIN/PLU\|DEF`, `PS\|UTR\|SIN\|DEF`, `RG\|NEU\|SIN\|IND\|NOM`, `RG\|NOM`, `RG\|SMS`, `RG\|UTR\|SIN\|IND\|NOM`, `RO\|MAS\|SIN\|IND/DEF\|NOM`, `RO\|NOM`, `SN`, `UO`, `VB\|AN`, `VB\|IMP\|AKT`, `VB\|IMP\|SFO`, `VB\|INF\|AKT`, `VB\|INF\|SFO`, `VB\|KON\|PRS\|AKT`, `VB\|KON\|PRT\|AKT`, `VB\|PRS\|AKT`, `VB\|PRS\|SFO`, `VB\|PRT\|AKT`, `VB\|PRT\|SFO`, `VB\|SUP\|AKT`, `VB\|SUP\|SFO`, `_SP` |
| **`morphologizer`** | `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `POS=PUNCT`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|POS=ADV`, `POS=SCONJ`, `POS=ADV`, `Case=Nom\|Definite=Ind\|Gender=Com\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PART`, `POS=VERB\|VerbForm=Inf`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=CCONJ`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Pos\|POS=ADV`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=VERB\|VerbForm=Sup\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PART\|Polarity=Neg`, `Case=Nom\|Degree=Pos\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Sup\|POS=ADV`, `Case=Nom\|NumType=Card\|POS=NUM`, `Abbr=Yes\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Sup\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `POS=AUX\|VerbForm=Sup\|Voice=Act`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Rcp`, `POS=SPACE`, `POS=VERB\|VerbForm=Sup\|Voice=Pass`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|POS=ADJ`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|POS=DET\|PronType=Ind`, `Case=Nom\|Definite=Def\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Nom\|POS=ADJ\|Tense=Pres\|VerbForm=Part`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Acc\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Dem`, `Definite=Def\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=NOUN`, `Case=Nom\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Ind`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Com\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Def\|POS=PRON\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Nom\|POS=PROPN`, `Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Plur\|POS=PRON\|PronType=Prs`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Definite=Def\|Gender=Com\|Number=Plur\|POS=PRON\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Int`, `Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|POS=PROPN`, `POS=PROPN`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Int`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Neut\|POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Int`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Neg`, `POS=VERB\|VerbForm=Sup`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|POS=ADJ`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=SYM`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Definite=Ind\|Degree=Sup\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Neg`, `Mood=Sub\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|Gender=Com\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, 
`Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Ind\|POS=DET\|PronType=Prs`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Rel`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Def\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Ind`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Abbr=Yes\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Rel`, `NumType=Card\|POS=NUM`, `POS=INTJ`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int`, `Degree=Sup\|POS=ADV\|Polarity=Neg`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Int`, `POS=ADV\|Polarity=Neg`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Ind`, `POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Def\|POS=PRON\|PronType=Ind`, `Foreign=Yes\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Dem`, `Abbr=Yes\|Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Rel`, `Foreign=Yes\|POS=CCONJ`, `POS=DET\|PronType=Art`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Degree=Pos\|POS=ADV\|Polarity=Neg`, `Mood=Sub\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=PRON\|PronType=Ind`, `Definite=Ind\|POS=DET\|PronType=Neg`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Neg`, `POS=CCONJ\|Polarity=Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Imp\|POS=AUX\|VerbForm=Fin\|Voice=Act`, `Foreign=Yes\|POS=ADV`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Rcp`, 
`Case=Acc\|Definite=Def\|POS=PRON\|Polarity=Neg\|PronType=Ind` |
| **`parser`** | `ROOT`, `acl`, `acl:cleft`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `expl`, `fixed`, `flat:name`, `iobj`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `EVN`, `LOC`, `MSR`, `OBJ`, `ORG`, `PRS`, `TME`, `WRK` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.99 |
| `TOKEN_P` | 99.95 |
| `TOKEN_R` | 99.96 |
| `TOKEN_F` | 99.95 |
| `TAG_ACC` | 95.09 |
| `POS_ACC` | 96.38 |
| `MORPH_ACC` | 95.85 |
| `MORPH_MICRO_P` | 97.77 |
| `MORPH_MICRO_R` | 97.39 |
| `MORPH_MICRO_F` | 97.58 |
| `SENTS_P` | 92.29 |
| `SENTS_R` | 95.04 |
| `SENTS_F` | 93.65 |
| `DEP_UAS` | 83.63 |
| `DEP_LAS` | 78.79 |
| `LEMMA_ACC` | 95.56 |
| `ENTS_P` | 86.02 |
| `ENTS_R` | 76.20 |
| `ENTS_F` | 80.82 | |
D3vil/DialoGPT-smaall-harrypotter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- spacy
- token-classification
language:
- ko
license: cc-by-sa-4.0
model-index:
- name: ko_core_news_sm
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.7704418068
- name: NER Recall
type: recall
value: 0.6603320381
- name: NER F Score
type: f_score
value: 0.7111499981
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.7305919816
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.8582222398
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.8356969086
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.7360798556
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.6558677391
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.999274135
---
### Details: https://spacy.io/models/ko#ko_core_news_sm
Korean pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
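A minimal tagging sketch (assumes the pipeline is installed, e.g. via `python -m spacy download ko_core_news_sm`); the sentence is illustrative only:

```python
import spacy

nlp = spacy.load("ko_core_news_sm")
doc = nlp("서울은 대한민국의 수도이다.")

# `pos_` comes from the morphologizer (UPOS), `tag_` from the tagger (XPOS).
for token in doc:
    print(token.text, token.pos_, token.tag_)
```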
| Feature | Description |
| --- | --- |
| **Name** | `ko_core_news_sm` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Korean Kaist v2.8](https://github.com/UniversalDependencies/UD_Korean-Kaist) (Choi, Jinho; Han, Na-Rae; Hwang, Jena; Chun, Jayeol)<br />[KLUE v1.1.0](https://github.com/KLUE-benchmark/KLUE) (Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, Junseong Kim, Youngsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Ryu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Alice Oh, Jung-Woo Ha, Kyunghyun Cho) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (2028 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `_SP`, `ecs`, `etm`, `f`, `f+f+jcj`, `f+f+jcs`, `f+f+jct`, `f+f+jxt`, `f+jca`, `f+jca+jp+ecc`, `f+jca+jp+ep+ef`, `f+jca+jxc`, `f+jca+jxc+jcm`, `f+jca+jxt`, `f+jcj`, `f+jcm`, `f+jco`, `f+jcs`, `f+jct`, `f+jct+jcm`, `f+jp+ef`, `f+jp+ep+ef`, `f+jp+etm`, `f+jxc`, `f+jxt`, `f+ncn`, `f+ncn+jcm`, `f+ncn+jcs`, `f+ncn+jp+ecc`, `f+ncn+jxt`, `f+ncpa+jcm`, `f+npp+jcs`, `f+nq`, `f+xsn`, `f+xsn+jco`, `f+xsn+jxt`, `ii`, `jca`, `jca+jcm`, `jca+jxc`, `jca+jxt`, `jcc`, `jcj`, `jcm`, `jco`, `jcr`, `jcr+jxc`, `jcs`, `jct`, `jct+jcm`, `jct+jxt`, `jp+ecc`, `jp+ecs`, `jp+ef`, `jp+ef+jcr`, `jp+ef+jcr+jxc`, `jp+ep+ecs`, `jp+ep+ef`, `jp+ep+etm`, `jp+ep+etn`, `jp+etm`, `jp+etn`, `jp+etn+jco`, `jp+etn+jxc`, `jxc`, `jxc+jca`, `jxc+jco`, `jxc+jcs`, `jxt`, `mad`, `mad+jxc`, `mad+jxt`, `mag`, `mag+jca`, `mag+jcm`, `mag+jcs`, `mag+jp+ef+jcr`, `mag+jxc`, `mag+jxc+jxc`, `mag+jxt`, `mag+xsn`, `maj`, `maj+jxc`, `maj+jxt`, `mma`, `mmd`, `nbn`, `nbn+jca`, `nbn+jca+jcj`, `nbn+jca+jcm`, `nbn+jca+jp+ef`, `nbn+jca+jxc`, `nbn+jca+jxt`, `nbn+jcc`, `nbn+jcj`, `nbn+jcm`, `nbn+jco`, `nbn+jcr`, `nbn+jcs`, `nbn+jct`, `nbn+jct+jcm`, `nbn+jct+jxt`, `nbn+jp+ecc`, `nbn+jp+ecs`, `nbn+jp+ecs+jca`, `nbn+jp+ecs+jcm`, `nbn+jp+ecs+jco`, `nbn+jp+ecs+jxc`, `nbn+jp+ecs+jxt`, `nbn+jp+ecx`, `nbn+jp+ef`, `nbn+jp+ef+jca`, `nbn+jp+ef+jco`, `nbn+jp+ef+jcr`, `nbn+jp+ef+jcr+jxc`, `nbn+jp+ef+jcr+jxt`, `nbn+jp+ef+jcs`, `nbn+jp+ef+jxc`, `nbn+jp+ef+jxc+jco`, `nbn+jp+ef+jxf`, `nbn+jp+ef+jxt`, `nbn+jp+ep+ecc`, `nbn+jp+ep+ecs`, `nbn+jp+ep+ecs+jxc`, `nbn+jp+ep+ef`, `nbn+jp+ep+ef+jcr`, `nbn+jp+ep+etm`, `nbn+jp+ep+etn`, `nbn+jp+ep+etn+jco`, `nbn+jp+ep+etn+jcs`, `nbn+jp+etm`, `nbn+jp+etn`, `nbn+jp+etn+jca`, `nbn+jp+etn+jca+jxt`, `nbn+jp+etn+jco`, `nbn+jp+etn+jcs`, `nbn+jp+etn+jxc`, `nbn+jp+etn+jxt`, `nbn+jxc`, `nbn+jxc+jca`, `nbn+jxc+jca+jxc`, `nbn+jxc+jca+jxt`, `nbn+jxc+jcc`, `nbn+jxc+jcm`, `nbn+jxc+jco`, `nbn+jxc+jcs`, `nbn+jxc+jp+ef`, `nbn+jxc+jxc`, `nbn+jxc+jxt`, `nbn+jxt`, `nbn+nbn`, `nbn+nbn+jp+ef`, `nbn+xsm+ecs`, `nbn+xsm+ef`, `nbn+xsm+ep+ef`, `nbn+xsm+ep+ef+jcr`, `nbn+xsm+etm`, `nbn+xsn`, `nbn+xsn+jca`, `nbn+xsn+jca+jp+ef+jcr`, `nbn+xsn+jca+jxc`, `nbn+xsn+jca+jxt`, `nbn+xsn+jcm`, `nbn+xsn+jco`, `nbn+xsn+jcs`, `nbn+xsn+jct`, `nbn+xsn+jp+ecc`, `nbn+xsn+jp+ecs`, `nbn+xsn+jp+ef`, `nbn+xsn+jp+ef+jcr`, `nbn+xsn+jp+ep+ef`, `nbn+xsn+jxc`, `nbn+xsn+jxt`, `nbn+xsv+etm`, `nbu`, `nbu+jca`, `nbu+jca+jxc`, `nbu+jca+jxt`, `nbu+jcc`, `nbu+jcc+jxc`, `nbu+jcj`, `nbu+jcm`, `nbu+jco`, `nbu+jcs`, `nbu+jct`, `nbu+jct+jxc`, `nbu+jp+ecc`, `nbu+jp+ecs`, `nbu+jp+ef`, `nbu+jp+ef+jcr`, `nbu+jp+ef+jxc`, `nbu+jp+ep+ecc`, `nbu+jp+ep+ecs`, `nbu+jp+ep+ef`, `nbu+jp+ep+ef+jcr`, `nbu+jp+ep+etm`, `nbu+jp+ep+etn+jco`, `nbu+jp+etm`, `nbu+jxc`, `nbu+jxc+jca`, `nbu+jxc+jcs`, `nbu+jxc+jp+ef`, `nbu+jxc+jp+ep+ef`, `nbu+jxc+jxt`, `nbu+jxt`, `nbu+ncn`, `nbu+ncn+jca`, `nbu+ncn+jcm`, `nbu+xsn`, `nbu+xsn+jca`, `nbu+xsn+jca+jxc`, `nbu+xsn+jca+jxt`, `nbu+xsn+jcm`, `nbu+xsn+jco`, `nbu+xsn+jcs`, `nbu+xsn+jp+ecs`, `nbu+xsn+jp+ep+ef`, `nbu+xsn+jxc`, `nbu+xsn+jxc+jxt`, `nbu+xsn+jxt`, `nbu+xsv+ecc`, `nbu+xsv+etm`, `ncn`, `ncn+f+ncpa+jco`, `ncn+jca`, `ncn+jca+jca`, `ncn+jca+jcc`, `ncn+jca+jcj`, `ncn+jca+jcm`, `ncn+jca+jcs`, `ncn+jca+jct`, `ncn+jca+jp+ecc`, `ncn+jca+jp+ecs`, `ncn+jca+jp+ef`, `ncn+jca+jp+ep+ef`, `ncn+jca+jp+etm`, `ncn+jca+jp+etn+jxt`, `ncn+jca+jxc`, `ncn+jca+jxc+jcc`, `ncn+jca+jxc+jcm`, `ncn+jca+jxc+jxc`, `ncn+jca+jxc+jxt`, `ncn+jca+jxt`, `ncn+jcc`, `ncn+jcc+jxc`, `ncn+jcj`, `ncn+jcj+jxt`, `ncn+jcm`, `ncn+jco`, `ncn+jcr`, `ncn+jcr+jxc`, `ncn+jcs`, `ncn+jcs+jxt`, `ncn+jct`, `ncn+jct+jcm`, 
`ncn+jct+jxc`, `ncn+jct+jxt`, `ncn+jcv`, `ncn+jp+ecc`, `ncn+jp+ecc+jct`, `ncn+jp+ecc+jxc`, `ncn+jp+ecs`, `ncn+jp+ecs+jcm`, `ncn+jp+ecs+jco`, `ncn+jp+ecs+jxc`, `ncn+jp+ecs+jxt`, `ncn+jp+ecx`, `ncn+jp+ef`, `ncn+jp+ef+jca`, `ncn+jp+ef+jcm`, `ncn+jp+ef+jco`, `ncn+jp+ef+jcr`, `ncn+jp+ef+jcr+jxc`, `ncn+jp+ef+jcr+jxt`, `ncn+jp+ef+jp+etm`, `ncn+jp+ef+jxc`, `ncn+jp+ef+jxf`, `ncn+jp+ef+jxt`, `ncn+jp+ep+ecc`, `ncn+jp+ep+ecs`, `ncn+jp+ep+ecs+jxc`, `ncn+jp+ep+ecx`, `ncn+jp+ep+ef`, `ncn+jp+ep+ef+jcr`, `ncn+jp+ep+ef+jcr+jxc`, `ncn+jp+ep+ef+jxc`, `ncn+jp+ep+ef+jxf`, `ncn+jp+ep+ef+jxt`, `ncn+jp+ep+ep+etm`, `ncn+jp+ep+etm`, `ncn+jp+ep+etn`, `ncn+jp+ep+etn+jca`, `ncn+jp+ep+etn+jca+jxc`, `ncn+jp+ep+etn+jco`, `ncn+jp+ep+etn+jcs`, `ncn+jp+ep+etn+jxt`, `ncn+jp+etm`, `ncn+jp+etn`, `ncn+jp+etn+jca`, `ncn+jp+etn+jca+jxc`, `ncn+jp+etn+jca+jxt`, `ncn+jp+etn+jco`, `ncn+jp+etn+jcs`, `ncn+jp+etn+jct`, `ncn+jp+etn+jxc`, `ncn+jp+etn+jxt`, `ncn+jxc`, `ncn+jxc+jca`, `ncn+jxc+jca+jxc`, `ncn+jxc+jca+jxt`, `ncn+jxc+jcc`, `ncn+jxc+jcm`, `ncn+jxc+jco`, `ncn+jxc+jcs`, `ncn+jxc+jct+jxt`, `ncn+jxc+jp+ef`, `ncn+jxc+jp+ef+jcr`, `ncn+jxc+jp+ep+ecs`, `ncn+jxc+jp+ep+ef`, `ncn+jxc+jp+etm`, `ncn+jxc+jxc`, `ncn+jxc+jxt`, `ncn+jxt`, `ncn+jxt+jcm`, `ncn+jxt+jxc`, `ncn+nbn`, `ncn+nbn+jca`, `ncn+nbn+jcm`, `ncn+nbn+jcs`, `ncn+nbn+jp+ecc`, `ncn+nbn+jp+ep+ef`, `ncn+nbn+jxc`, `ncn+nbn+jxt`, `ncn+nbu`, `ncn+nbu+jca`, `ncn+nbu+jcm`, `ncn+nbu+jco`, `ncn+nbu+jp+ef`, `ncn+nbu+jxc`, `ncn+nbu+ncn`, `ncn+ncn`, `ncn+ncn+jca`, `ncn+ncn+jca+jcc`, `ncn+ncn+jca+jcm`, `ncn+ncn+jca+jxc`, `ncn+ncn+jca+jxc+jcm`, `ncn+ncn+jca+jxc+jxc`, `ncn+ncn+jca+jxt`, `ncn+ncn+jcc`, `ncn+ncn+jcj`, `ncn+ncn+jcm`, `ncn+ncn+jco`, `ncn+ncn+jcr`, `ncn+ncn+jcs`, `ncn+ncn+jct`, `ncn+ncn+jct+jcm`, `ncn+ncn+jct+jxc`, `ncn+ncn+jct+jxt`, `ncn+ncn+jp+ecc`, `ncn+ncn+jp+ecs`, `ncn+ncn+jp+ef`, `ncn+ncn+jp+ef+jcm`, `ncn+ncn+jp+ef+jcr`, `ncn+ncn+jp+ef+jcs`, `ncn+ncn+jp+ep+ecc`, `ncn+ncn+jp+ep+ecs`, `ncn+ncn+jp+ep+ef`, `ncn+ncn+jp+ep+ef+jcr`, `ncn+ncn+jp+ep+ep+etm`, `ncn+ncn+jp+ep+etm`, `ncn+ncn+jp+ep+etn`, `ncn+ncn+jp+etm`, `ncn+ncn+jp+etn`, `ncn+ncn+jp+etn+jca`, `ncn+ncn+jp+etn+jco`, `ncn+ncn+jp+etn+jxc`, `ncn+ncn+jxc`, `ncn+ncn+jxc+jca`, `ncn+ncn+jxc+jcc`, `ncn+ncn+jxc+jcm`, `ncn+ncn+jxc+jco`, `ncn+ncn+jxc+jcs`, `ncn+ncn+jxc+jxc`, `ncn+ncn+jxt`, `ncn+ncn+nbn`, `ncn+ncn+ncn`, `ncn+ncn+ncn+jca`, `ncn+ncn+ncn+jca+jcm`, `ncn+ncn+ncn+jca+jxt`, `ncn+ncn+ncn+jcj`, `ncn+ncn+ncn+jcm`, `ncn+ncn+ncn+jco`, `ncn+ncn+ncn+jcs`, `ncn+ncn+ncn+jct+jxt`, `ncn+ncn+ncn+jp+etn+jxc`, `ncn+ncn+ncn+jxt`, `ncn+ncn+ncn+ncn+jca`, `ncn+ncn+ncn+ncn+jca+jxt`, `ncn+ncn+ncn+ncn+jco`, `ncn+ncn+ncn+xsn+jp+etm`, `ncn+ncn+ncpa`, `ncn+ncn+ncpa+jca`, `ncn+ncn+ncpa+jcm`, `ncn+ncn+ncpa+jco`, `ncn+ncn+ncpa+jcs`, `ncn+ncn+ncpa+jxc`, `ncn+ncn+ncpa+jxt`, `ncn+ncn+ncpa+ncn`, `ncn+ncn+ncpa+ncn+jca`, `ncn+ncn+ncpa+ncn+jcj`, `ncn+ncn+ncpa+ncn+jcm`, `ncn+ncn+ncpa+ncn+jxt`, `ncn+ncn+xsn`, `ncn+ncn+xsn+jca`, `ncn+ncn+xsn+jca+jxt`, `ncn+ncn+xsn+jcj`, `ncn+ncn+xsn+jcm`, `ncn+ncn+xsn+jco`, `ncn+ncn+xsn+jcs`, `ncn+ncn+xsn+jct`, `ncn+ncn+xsn+jp+ecs`, `ncn+ncn+xsn+jp+ep+ef`, `ncn+ncn+xsn+jp+etm`, `ncn+ncn+xsn+jxc`, `ncn+ncn+xsn+jxc+jcs`, `ncn+ncn+xsn+jxt`, `ncn+ncn+xsv+ecc`, `ncn+ncn+xsv+etm`, `ncn+ncpa`, `ncn+ncpa+jca`, `ncn+ncpa+jca+jcm`, `ncn+ncpa+jca+jxc`, `ncn+ncpa+jca+jxt`, `ncn+ncpa+jcc`, `ncn+ncpa+jcj`, `ncn+ncpa+jcm`, `ncn+ncpa+jco`, `ncn+ncpa+jcr`, `ncn+ncpa+jcs`, `ncn+ncpa+jct`, `ncn+ncpa+jct+jcm`, `ncn+ncpa+jct+jxt`, `ncn+ncpa+jp+ecc`, `ncn+ncpa+jp+ecc+jxc`, `ncn+ncpa+jp+ecs`, `ncn+ncpa+jp+ecs+jxc`, `ncn+ncpa+jp+ef`, `ncn+ncpa+jp+ef+jcr`, 
`ncn+ncpa+jp+ef+jcr+jxc`, `ncn+ncpa+jp+ep+ef`, `ncn+ncpa+jp+ep+etm`, `ncn+ncpa+jp+ep+etn`, `ncn+ncpa+jp+etm`, `ncn+ncpa+jxc`, `ncn+ncpa+jxc+jca+jxc`, `ncn+ncpa+jxc+jco`, `ncn+ncpa+jxc+jcs`, `ncn+ncpa+jxt`, `ncn+ncpa+nbn+jcs`, `ncn+ncpa+ncn`, `ncn+ncpa+ncn+jca`, `ncn+ncpa+ncn+jca+jcm`, `ncn+ncpa+ncn+jca+jxc`, `ncn+ncpa+ncn+jca+jxt`, `ncn+ncpa+ncn+jcj`, `ncn+ncpa+ncn+jcm`, `ncn+ncpa+ncn+jco`, `ncn+ncpa+ncn+jcs`, `ncn+ncpa+ncn+jct`, `ncn+ncpa+ncn+jct+jcm`, `ncn+ncpa+ncn+jp+ef+jcr`, `ncn+ncpa+ncn+jp+ep+etm`, `ncn+ncpa+ncn+jxc`, `ncn+ncpa+ncn+jxt`, `ncn+ncpa+ncn+xsn+jcm`, `ncn+ncpa+ncn+xsn+jxt`, `ncn+ncpa+ncpa`, `ncn+ncpa+ncpa+jca`, `ncn+ncpa+ncpa+jcj`, `ncn+ncpa+ncpa+jcm`, `ncn+ncpa+ncpa+jco`, `ncn+ncpa+ncpa+jcs`, `ncn+ncpa+ncpa+jp+ep+ef`, `ncn+ncpa+ncpa+jxt`, `ncn+ncpa+ncpa+ncn`, `ncn+ncpa+xsn`, `ncn+ncpa+xsn+jcm`, `ncn+ncpa+xsn+jco`, `ncn+ncpa+xsn+jcs`, `ncn+ncpa+xsn+jp+ecc`, `ncn+ncpa+xsn+jp+etm`, `ncn+ncpa+xsn+jxt`, `ncn+ncpa+xsv+ecc`, `ncn+ncpa+xsv+ecs`, `ncn+ncpa+xsv+ecx`, `ncn+ncpa+xsv+ecx+px+etm`, `ncn+ncpa+xsv+ef`, `ncn+ncpa+xsv+ef+jcm`, `ncn+ncpa+xsv+ef+jcr`, `ncn+ncpa+xsv+etm`, _(truncated: full list in pipeline meta)_ |
| **`morphologizer`** | `POS=CCONJ`, `POS=ADV`, `POS=SCONJ`, `POS=DET`, `POS=NOUN`, `POS=VERB`, `POS=ADJ`, `POS=PUNCT`, `POS=SPACE`, `POS=AUX`, `POS=PRON`, `POS=PROPN`, `POS=NUM`, `POS=INTJ`, `POS=PART`, `POS=X`, `POS=ADP`, `POS=SYM` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `punct`, `xcomp` |
| **`ner`** | `DT`, `LC`, `OG`, `PS`, `QT`, `TI` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 100.00 |
| `TOKEN_P` | 100.00 |
| `TOKEN_R` | 100.00 |
| `TOKEN_F` | 100.00 |
| `TAG_ACC` | 73.06 |
| `POS_ACC` | 85.82 |
| `SENTS_P` | 99.90 |
| `SENTS_R` | 99.95 |
| `SENTS_F` | 99.93 |
| `DEP_UAS` | 73.61 |
| `DEP_LAS` | 65.59 |
| `LEMMA_ACC` | 83.57 |
| `ENTS_P` | 77.04 |
| `ENTS_R` | 66.03 |
| `ENTS_F` | 71.11 | |
D3vil/DialoGPT-smaall-harrypottery | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- spacy
- token-classification
language:
- ko
license: cc-by-sa-4.0
model-index:
- name: ko_core_news_md
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8497178497
- name: NER Recall
type: recall
value: 0.8084775698
- name: NER F Score
type: f_score
value: 0.8285848749
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.8351991772
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9458443768
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.8994244348
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8389181
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.8087068889
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 1.0
---
### Details: https://spacy.io/models/ko#ko_core_news_md
Korean pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
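A small dependency-parsing sketch (assumes the pipeline is installed; the sentence is illustrative only):

```python
import spacy

nlp = spacy.load("ko_core_news_md")
doc = nlp("그는 어제 서울에서 친구를 만났다.")

# The parser predicts Universal Dependencies relations (see `parser` below).
for token in doc:
    print(token.text, token.dep_, token.head.text)
```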
| Feature | Description |
| --- | --- |
| **Name** | `ko_core_news_md` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | floret (50000, 300) |
| **Sources** | [UD Korean Kaist v2.8](https://github.com/UniversalDependencies/UD_Korean-Kaist) (Choi, Jinho; Han, Na-Rae; Hwang, Jena; Chun, Jayeol)<br />[KLUE v1.1.0](https://github.com/KLUE-benchmark/KLUE) (Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, Junseong Kim, Youngsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Ryu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Alice Oh, Jung-Woo Ha, Kyunghyun Cho)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (2028 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `_SP`, `ecs`, `etm`, `f`, `f+f+jcj`, `f+f+jcs`, `f+f+jct`, `f+f+jxt`, `f+jca`, `f+jca+jp+ecc`, `f+jca+jp+ep+ef`, `f+jca+jxc`, `f+jca+jxc+jcm`, `f+jca+jxt`, `f+jcj`, `f+jcm`, `f+jco`, `f+jcs`, `f+jct`, `f+jct+jcm`, `f+jp+ef`, `f+jp+ep+ef`, `f+jp+etm`, `f+jxc`, `f+jxt`, `f+ncn`, `f+ncn+jcm`, `f+ncn+jcs`, `f+ncn+jp+ecc`, `f+ncn+jxt`, `f+ncpa+jcm`, `f+npp+jcs`, `f+nq`, `f+xsn`, `f+xsn+jco`, `f+xsn+jxt`, `ii`, `jca`, `jca+jcm`, `jca+jxc`, `jca+jxt`, `jcc`, `jcj`, `jcm`, `jco`, `jcr`, `jcr+jxc`, `jcs`, `jct`, `jct+jcm`, `jct+jxt`, `jp+ecc`, `jp+ecs`, `jp+ef`, `jp+ef+jcr`, `jp+ef+jcr+jxc`, `jp+ep+ecs`, `jp+ep+ef`, `jp+ep+etm`, `jp+ep+etn`, `jp+etm`, `jp+etn`, `jp+etn+jco`, `jp+etn+jxc`, `jxc`, `jxc+jca`, `jxc+jco`, `jxc+jcs`, `jxt`, `mad`, `mad+jxc`, `mad+jxt`, `mag`, `mag+jca`, `mag+jcm`, `mag+jcs`, `mag+jp+ef+jcr`, `mag+jxc`, `mag+jxc+jxc`, `mag+jxt`, `mag+xsn`, `maj`, `maj+jxc`, `maj+jxt`, `mma`, `mmd`, `nbn`, `nbn+jca`, `nbn+jca+jcj`, `nbn+jca+jcm`, `nbn+jca+jp+ef`, `nbn+jca+jxc`, `nbn+jca+jxt`, `nbn+jcc`, `nbn+jcj`, `nbn+jcm`, `nbn+jco`, `nbn+jcr`, `nbn+jcs`, `nbn+jct`, `nbn+jct+jcm`, `nbn+jct+jxt`, `nbn+jp+ecc`, `nbn+jp+ecs`, `nbn+jp+ecs+jca`, `nbn+jp+ecs+jcm`, `nbn+jp+ecs+jco`, `nbn+jp+ecs+jxc`, `nbn+jp+ecs+jxt`, `nbn+jp+ecx`, `nbn+jp+ef`, `nbn+jp+ef+jca`, `nbn+jp+ef+jco`, `nbn+jp+ef+jcr`, `nbn+jp+ef+jcr+jxc`, `nbn+jp+ef+jcr+jxt`, `nbn+jp+ef+jcs`, `nbn+jp+ef+jxc`, `nbn+jp+ef+jxc+jco`, `nbn+jp+ef+jxf`, `nbn+jp+ef+jxt`, `nbn+jp+ep+ecc`, `nbn+jp+ep+ecs`, `nbn+jp+ep+ecs+jxc`, `nbn+jp+ep+ef`, `nbn+jp+ep+ef+jcr`, `nbn+jp+ep+etm`, `nbn+jp+ep+etn`, `nbn+jp+ep+etn+jco`, `nbn+jp+ep+etn+jcs`, `nbn+jp+etm`, `nbn+jp+etn`, `nbn+jp+etn+jca`, `nbn+jp+etn+jca+jxt`, `nbn+jp+etn+jco`, `nbn+jp+etn+jcs`, `nbn+jp+etn+jxc`, `nbn+jp+etn+jxt`, `nbn+jxc`, `nbn+jxc+jca`, `nbn+jxc+jca+jxc`, `nbn+jxc+jca+jxt`, `nbn+jxc+jcc`, `nbn+jxc+jcm`, `nbn+jxc+jco`, `nbn+jxc+jcs`, `nbn+jxc+jp+ef`, `nbn+jxc+jxc`, `nbn+jxc+jxt`, `nbn+jxt`, `nbn+nbn`, `nbn+nbn+jp+ef`, `nbn+xsm+ecs`, `nbn+xsm+ef`, `nbn+xsm+ep+ef`, `nbn+xsm+ep+ef+jcr`, `nbn+xsm+etm`, `nbn+xsn`, `nbn+xsn+jca`, `nbn+xsn+jca+jp+ef+jcr`, `nbn+xsn+jca+jxc`, `nbn+xsn+jca+jxt`, `nbn+xsn+jcm`, `nbn+xsn+jco`, `nbn+xsn+jcs`, `nbn+xsn+jct`, `nbn+xsn+jp+ecc`, `nbn+xsn+jp+ecs`, `nbn+xsn+jp+ef`, `nbn+xsn+jp+ef+jcr`, `nbn+xsn+jp+ep+ef`, `nbn+xsn+jxc`, `nbn+xsn+jxt`, `nbn+xsv+etm`, `nbu`, `nbu+jca`, `nbu+jca+jxc`, `nbu+jca+jxt`, `nbu+jcc`, `nbu+jcc+jxc`, `nbu+jcj`, `nbu+jcm`, `nbu+jco`, `nbu+jcs`, `nbu+jct`, `nbu+jct+jxc`, `nbu+jp+ecc`, `nbu+jp+ecs`, `nbu+jp+ef`, `nbu+jp+ef+jcr`, `nbu+jp+ef+jxc`, `nbu+jp+ep+ecc`, `nbu+jp+ep+ecs`, `nbu+jp+ep+ef`, `nbu+jp+ep+ef+jcr`, `nbu+jp+ep+etm`, `nbu+jp+ep+etn+jco`, `nbu+jp+etm`, `nbu+jxc`, `nbu+jxc+jca`, `nbu+jxc+jcs`, `nbu+jxc+jp+ef`, `nbu+jxc+jp+ep+ef`, `nbu+jxc+jxt`, `nbu+jxt`, `nbu+ncn`, `nbu+ncn+jca`, `nbu+ncn+jcm`, `nbu+xsn`, `nbu+xsn+jca`, `nbu+xsn+jca+jxc`, `nbu+xsn+jca+jxt`, `nbu+xsn+jcm`, `nbu+xsn+jco`, `nbu+xsn+jcs`, `nbu+xsn+jp+ecs`, `nbu+xsn+jp+ep+ef`, `nbu+xsn+jxc`, `nbu+xsn+jxc+jxt`, `nbu+xsn+jxt`, `nbu+xsv+ecc`, `nbu+xsv+etm`, `ncn`, `ncn+f+ncpa+jco`, `ncn+jca`, `ncn+jca+jca`, `ncn+jca+jcc`, `ncn+jca+jcj`, `ncn+jca+jcm`, `ncn+jca+jcs`, `ncn+jca+jct`, `ncn+jca+jp+ecc`, `ncn+jca+jp+ecs`, `ncn+jca+jp+ef`, `ncn+jca+jp+ep+ef`, `ncn+jca+jp+etm`, `ncn+jca+jp+etn+jxt`, `ncn+jca+jxc`, `ncn+jca+jxc+jcc`, `ncn+jca+jxc+jcm`, `ncn+jca+jxc+jxc`, `ncn+jca+jxc+jxt`, `ncn+jca+jxt`, `ncn+jcc`, `ncn+jcc+jxc`, `ncn+jcj`, `ncn+jcj+jxt`, `ncn+jcm`, `ncn+jco`, `ncn+jcr`, `ncn+jcr+jxc`, `ncn+jcs`, `ncn+jcs+jxt`, `ncn+jct`, `ncn+jct+jcm`, 
`ncn+jct+jxc`, `ncn+jct+jxt`, `ncn+jcv`, `ncn+jp+ecc`, `ncn+jp+ecc+jct`, `ncn+jp+ecc+jxc`, `ncn+jp+ecs`, `ncn+jp+ecs+jcm`, `ncn+jp+ecs+jco`, `ncn+jp+ecs+jxc`, `ncn+jp+ecs+jxt`, `ncn+jp+ecx`, `ncn+jp+ef`, `ncn+jp+ef+jca`, `ncn+jp+ef+jcm`, `ncn+jp+ef+jco`, `ncn+jp+ef+jcr`, `ncn+jp+ef+jcr+jxc`, `ncn+jp+ef+jcr+jxt`, `ncn+jp+ef+jp+etm`, `ncn+jp+ef+jxc`, `ncn+jp+ef+jxf`, `ncn+jp+ef+jxt`, `ncn+jp+ep+ecc`, `ncn+jp+ep+ecs`, `ncn+jp+ep+ecs+jxc`, `ncn+jp+ep+ecx`, `ncn+jp+ep+ef`, `ncn+jp+ep+ef+jcr`, `ncn+jp+ep+ef+jcr+jxc`, `ncn+jp+ep+ef+jxc`, `ncn+jp+ep+ef+jxf`, `ncn+jp+ep+ef+jxt`, `ncn+jp+ep+ep+etm`, `ncn+jp+ep+etm`, `ncn+jp+ep+etn`, `ncn+jp+ep+etn+jca`, `ncn+jp+ep+etn+jca+jxc`, `ncn+jp+ep+etn+jco`, `ncn+jp+ep+etn+jcs`, `ncn+jp+ep+etn+jxt`, `ncn+jp+etm`, `ncn+jp+etn`, `ncn+jp+etn+jca`, `ncn+jp+etn+jca+jxc`, `ncn+jp+etn+jca+jxt`, `ncn+jp+etn+jco`, `ncn+jp+etn+jcs`, `ncn+jp+etn+jct`, `ncn+jp+etn+jxc`, `ncn+jp+etn+jxt`, `ncn+jxc`, `ncn+jxc+jca`, `ncn+jxc+jca+jxc`, `ncn+jxc+jca+jxt`, `ncn+jxc+jcc`, `ncn+jxc+jcm`, `ncn+jxc+jco`, `ncn+jxc+jcs`, `ncn+jxc+jct+jxt`, `ncn+jxc+jp+ef`, `ncn+jxc+jp+ef+jcr`, `ncn+jxc+jp+ep+ecs`, `ncn+jxc+jp+ep+ef`, `ncn+jxc+jp+etm`, `ncn+jxc+jxc`, `ncn+jxc+jxt`, `ncn+jxt`, `ncn+jxt+jcm`, `ncn+jxt+jxc`, `ncn+nbn`, `ncn+nbn+jca`, `ncn+nbn+jcm`, `ncn+nbn+jcs`, `ncn+nbn+jp+ecc`, `ncn+nbn+jp+ep+ef`, `ncn+nbn+jxc`, `ncn+nbn+jxt`, `ncn+nbu`, `ncn+nbu+jca`, `ncn+nbu+jcm`, `ncn+nbu+jco`, `ncn+nbu+jp+ef`, `ncn+nbu+jxc`, `ncn+nbu+ncn`, `ncn+ncn`, `ncn+ncn+jca`, `ncn+ncn+jca+jcc`, `ncn+ncn+jca+jcm`, `ncn+ncn+jca+jxc`, `ncn+ncn+jca+jxc+jcm`, `ncn+ncn+jca+jxc+jxc`, `ncn+ncn+jca+jxt`, `ncn+ncn+jcc`, `ncn+ncn+jcj`, `ncn+ncn+jcm`, `ncn+ncn+jco`, `ncn+ncn+jcr`, `ncn+ncn+jcs`, `ncn+ncn+jct`, `ncn+ncn+jct+jcm`, `ncn+ncn+jct+jxc`, `ncn+ncn+jct+jxt`, `ncn+ncn+jp+ecc`, `ncn+ncn+jp+ecs`, `ncn+ncn+jp+ef`, `ncn+ncn+jp+ef+jcm`, `ncn+ncn+jp+ef+jcr`, `ncn+ncn+jp+ef+jcs`, `ncn+ncn+jp+ep+ecc`, `ncn+ncn+jp+ep+ecs`, `ncn+ncn+jp+ep+ef`, `ncn+ncn+jp+ep+ef+jcr`, `ncn+ncn+jp+ep+ep+etm`, `ncn+ncn+jp+ep+etm`, `ncn+ncn+jp+ep+etn`, `ncn+ncn+jp+etm`, `ncn+ncn+jp+etn`, `ncn+ncn+jp+etn+jca`, `ncn+ncn+jp+etn+jco`, `ncn+ncn+jp+etn+jxc`, `ncn+ncn+jxc`, `ncn+ncn+jxc+jca`, `ncn+ncn+jxc+jcc`, `ncn+ncn+jxc+jcm`, `ncn+ncn+jxc+jco`, `ncn+ncn+jxc+jcs`, `ncn+ncn+jxc+jxc`, `ncn+ncn+jxt`, `ncn+ncn+nbn`, `ncn+ncn+ncn`, `ncn+ncn+ncn+jca`, `ncn+ncn+ncn+jca+jcm`, `ncn+ncn+ncn+jca+jxt`, `ncn+ncn+ncn+jcj`, `ncn+ncn+ncn+jcm`, `ncn+ncn+ncn+jco`, `ncn+ncn+ncn+jcs`, `ncn+ncn+ncn+jct+jxt`, `ncn+ncn+ncn+jp+etn+jxc`, `ncn+ncn+ncn+jxt`, `ncn+ncn+ncn+ncn+jca`, `ncn+ncn+ncn+ncn+jca+jxt`, `ncn+ncn+ncn+ncn+jco`, `ncn+ncn+ncn+xsn+jp+etm`, `ncn+ncn+ncpa`, `ncn+ncn+ncpa+jca`, `ncn+ncn+ncpa+jcm`, `ncn+ncn+ncpa+jco`, `ncn+ncn+ncpa+jcs`, `ncn+ncn+ncpa+jxc`, `ncn+ncn+ncpa+jxt`, `ncn+ncn+ncpa+ncn`, `ncn+ncn+ncpa+ncn+jca`, `ncn+ncn+ncpa+ncn+jcj`, `ncn+ncn+ncpa+ncn+jcm`, `ncn+ncn+ncpa+ncn+jxt`, `ncn+ncn+xsn`, `ncn+ncn+xsn+jca`, `ncn+ncn+xsn+jca+jxt`, `ncn+ncn+xsn+jcj`, `ncn+ncn+xsn+jcm`, `ncn+ncn+xsn+jco`, `ncn+ncn+xsn+jcs`, `ncn+ncn+xsn+jct`, `ncn+ncn+xsn+jp+ecs`, `ncn+ncn+xsn+jp+ep+ef`, `ncn+ncn+xsn+jp+etm`, `ncn+ncn+xsn+jxc`, `ncn+ncn+xsn+jxc+jcs`, `ncn+ncn+xsn+jxt`, `ncn+ncn+xsv+ecc`, `ncn+ncn+xsv+etm`, `ncn+ncpa`, `ncn+ncpa+jca`, `ncn+ncpa+jca+jcm`, `ncn+ncpa+jca+jxc`, `ncn+ncpa+jca+jxt`, `ncn+ncpa+jcc`, `ncn+ncpa+jcj`, `ncn+ncpa+jcm`, `ncn+ncpa+jco`, `ncn+ncpa+jcr`, `ncn+ncpa+jcs`, `ncn+ncpa+jct`, `ncn+ncpa+jct+jcm`, `ncn+ncpa+jct+jxt`, `ncn+ncpa+jp+ecc`, `ncn+ncpa+jp+ecc+jxc`, `ncn+ncpa+jp+ecs`, `ncn+ncpa+jp+ecs+jxc`, `ncn+ncpa+jp+ef`, `ncn+ncpa+jp+ef+jcr`, 
`ncn+ncpa+jp+ef+jcr+jxc`, `ncn+ncpa+jp+ep+ef`, `ncn+ncpa+jp+ep+etm`, `ncn+ncpa+jp+ep+etn`, `ncn+ncpa+jp+etm`, `ncn+ncpa+jxc`, `ncn+ncpa+jxc+jca+jxc`, `ncn+ncpa+jxc+jco`, `ncn+ncpa+jxc+jcs`, `ncn+ncpa+jxt`, `ncn+ncpa+nbn+jcs`, `ncn+ncpa+ncn`, `ncn+ncpa+ncn+jca`, `ncn+ncpa+ncn+jca+jcm`, `ncn+ncpa+ncn+jca+jxc`, `ncn+ncpa+ncn+jca+jxt`, `ncn+ncpa+ncn+jcj`, `ncn+ncpa+ncn+jcm`, `ncn+ncpa+ncn+jco`, `ncn+ncpa+ncn+jcs`, `ncn+ncpa+ncn+jct`, `ncn+ncpa+ncn+jct+jcm`, `ncn+ncpa+ncn+jp+ef+jcr`, `ncn+ncpa+ncn+jp+ep+etm`, `ncn+ncpa+ncn+jxc`, `ncn+ncpa+ncn+jxt`, `ncn+ncpa+ncn+xsn+jcm`, `ncn+ncpa+ncn+xsn+jxt`, `ncn+ncpa+ncpa`, `ncn+ncpa+ncpa+jca`, `ncn+ncpa+ncpa+jcj`, `ncn+ncpa+ncpa+jcm`, `ncn+ncpa+ncpa+jco`, `ncn+ncpa+ncpa+jcs`, `ncn+ncpa+ncpa+jp+ep+ef`, `ncn+ncpa+ncpa+jxt`, `ncn+ncpa+ncpa+ncn`, `ncn+ncpa+xsn`, `ncn+ncpa+xsn+jcm`, `ncn+ncpa+xsn+jco`, `ncn+ncpa+xsn+jcs`, `ncn+ncpa+xsn+jp+ecc`, `ncn+ncpa+xsn+jp+etm`, `ncn+ncpa+xsn+jxt`, `ncn+ncpa+xsv+ecc`, `ncn+ncpa+xsv+ecs`, `ncn+ncpa+xsv+ecx`, `ncn+ncpa+xsv+ecx+px+etm`, `ncn+ncpa+xsv+ef`, `ncn+ncpa+xsv+ef+jcm`, `ncn+ncpa+xsv+ef+jcr`, `ncn+ncpa+xsv+etm`, _(truncated: full list in pipeline meta)_ |
| **`morphologizer`** | `POS=CCONJ`, `POS=ADV`, `POS=SCONJ`, `POS=DET`, `POS=NOUN`, `POS=VERB`, `POS=ADJ`, `POS=PUNCT`, `POS=SPACE`, `POS=AUX`, `POS=PRON`, `POS=PROPN`, `POS=NUM`, `POS=INTJ`, `POS=PART`, `POS=X`, `POS=ADP`, `POS=SYM` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `punct`, `xcomp` |
| **`ner`** | `DT`, `LC`, `OG`, `PS`, `QT`, `TI` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 100.00 |
| `TOKEN_P` | 100.00 |
| `TOKEN_R` | 100.00 |
| `TOKEN_F` | 100.00 |
| `TAG_ACC` | 83.52 |
| `POS_ACC` | 94.58 |
| `SENTS_P` | 100.00 |
| `SENTS_R` | 100.00 |
| `SENTS_F` | 100.00 |
| `DEP_UAS` | 83.89 |
| `DEP_LAS` | 80.87 |
| `LEMMA_ACC` | 89.94 |
| `ENTS_P` | 84.97 |
| `ENTS_R` | 80.85 |
| `ENTS_F` | 82.86 | |
D3xter1922/distilbert-base-uncased-finetuned-cola | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-05-02T08:19:03Z | ---
tags:
- spacy
- token-classification
language:
- ko
license: cc-by-sa-4.0
model-index:
- name: ko_core_news_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8669446273
- name: NER Recall
type: recall
value: 0.837301307
- name: NER F Score
type: f_score
value: 0.8518651621
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.8400253175
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9487717077
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9009276291
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8416620252
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.8140177338
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 1.0
---
### Details: https://spacy.io/models/ko#ko_core_news_lg
Korean pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
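Since this pipeline ships floret vectors (200000, 300), token, span, and doc similarity are available alongside the trained components; a small sketch, assuming the pipeline is installed:

```python
import spacy

nlp = spacy.load("ko_core_news_lg")

# floret vectors back `vector` and `similarity` for tokens, spans, and docs.
doc1 = nlp("고양이")
doc2 = nlp("강아지")
print(doc1.similarity(doc2))
```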
| Feature | Description |
| --- | --- |
| **Name** | `ko_core_news_lg` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | floret (200000, 300) |
| **Sources** | [UD Korean Kaist v2.8](https://github.com/UniversalDependencies/UD_Korean-Kaist) (Choi, Jinho; Han, Na-Rae; Hwang, Jena; Chun, Jayeol)<br />[KLUE v1.1.0](https://github.com/KLUE-benchmark/KLUE) (Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, Junseong Kim, Youngsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Ryu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Alice Oh, Jung-Woo Ha, Kyunghyun Cho)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (2028 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `_SP`, `ecs`, `etm`, `f`, `f+f+jcj`, `f+f+jcs`, `f+f+jct`, `f+f+jxt`, `f+jca`, `f+jca+jp+ecc`, `f+jca+jp+ep+ef`, `f+jca+jxc`, `f+jca+jxc+jcm`, `f+jca+jxt`, `f+jcj`, `f+jcm`, `f+jco`, `f+jcs`, `f+jct`, `f+jct+jcm`, `f+jp+ef`, `f+jp+ep+ef`, `f+jp+etm`, `f+jxc`, `f+jxt`, `f+ncn`, `f+ncn+jcm`, `f+ncn+jcs`, `f+ncn+jp+ecc`, `f+ncn+jxt`, `f+ncpa+jcm`, `f+npp+jcs`, `f+nq`, `f+xsn`, `f+xsn+jco`, `f+xsn+jxt`, `ii`, `jca`, `jca+jcm`, `jca+jxc`, `jca+jxt`, `jcc`, `jcj`, `jcm`, `jco`, `jcr`, `jcr+jxc`, `jcs`, `jct`, `jct+jcm`, `jct+jxt`, `jp+ecc`, `jp+ecs`, `jp+ef`, `jp+ef+jcr`, `jp+ef+jcr+jxc`, `jp+ep+ecs`, `jp+ep+ef`, `jp+ep+etm`, `jp+ep+etn`, `jp+etm`, `jp+etn`, `jp+etn+jco`, `jp+etn+jxc`, `jxc`, `jxc+jca`, `jxc+jco`, `jxc+jcs`, `jxt`, `mad`, `mad+jxc`, `mad+jxt`, `mag`, `mag+jca`, `mag+jcm`, `mag+jcs`, `mag+jp+ef+jcr`, `mag+jxc`, `mag+jxc+jxc`, `mag+jxt`, `mag+xsn`, `maj`, `maj+jxc`, `maj+jxt`, `mma`, `mmd`, `nbn`, `nbn+jca`, `nbn+jca+jcj`, `nbn+jca+jcm`, `nbn+jca+jp+ef`, `nbn+jca+jxc`, `nbn+jca+jxt`, `nbn+jcc`, `nbn+jcj`, `nbn+jcm`, `nbn+jco`, `nbn+jcr`, `nbn+jcs`, `nbn+jct`, `nbn+jct+jcm`, `nbn+jct+jxt`, `nbn+jp+ecc`, `nbn+jp+ecs`, `nbn+jp+ecs+jca`, `nbn+jp+ecs+jcm`, `nbn+jp+ecs+jco`, `nbn+jp+ecs+jxc`, `nbn+jp+ecs+jxt`, `nbn+jp+ecx`, `nbn+jp+ef`, `nbn+jp+ef+jca`, `nbn+jp+ef+jco`, `nbn+jp+ef+jcr`, `nbn+jp+ef+jcr+jxc`, `nbn+jp+ef+jcr+jxt`, `nbn+jp+ef+jcs`, `nbn+jp+ef+jxc`, `nbn+jp+ef+jxc+jco`, `nbn+jp+ef+jxf`, `nbn+jp+ef+jxt`, `nbn+jp+ep+ecc`, `nbn+jp+ep+ecs`, `nbn+jp+ep+ecs+jxc`, `nbn+jp+ep+ef`, `nbn+jp+ep+ef+jcr`, `nbn+jp+ep+etm`, `nbn+jp+ep+etn`, `nbn+jp+ep+etn+jco`, `nbn+jp+ep+etn+jcs`, `nbn+jp+etm`, `nbn+jp+etn`, `nbn+jp+etn+jca`, `nbn+jp+etn+jca+jxt`, `nbn+jp+etn+jco`, `nbn+jp+etn+jcs`, `nbn+jp+etn+jxc`, `nbn+jp+etn+jxt`, `nbn+jxc`, `nbn+jxc+jca`, `nbn+jxc+jca+jxc`, `nbn+jxc+jca+jxt`, `nbn+jxc+jcc`, `nbn+jxc+jcm`, `nbn+jxc+jco`, `nbn+jxc+jcs`, `nbn+jxc+jp+ef`, `nbn+jxc+jxc`, `nbn+jxc+jxt`, `nbn+jxt`, `nbn+nbn`, `nbn+nbn+jp+ef`, `nbn+xsm+ecs`, `nbn+xsm+ef`, `nbn+xsm+ep+ef`, `nbn+xsm+ep+ef+jcr`, `nbn+xsm+etm`, `nbn+xsn`, `nbn+xsn+jca`, `nbn+xsn+jca+jp+ef+jcr`, `nbn+xsn+jca+jxc`, `nbn+xsn+jca+jxt`, `nbn+xsn+jcm`, `nbn+xsn+jco`, `nbn+xsn+jcs`, `nbn+xsn+jct`, `nbn+xsn+jp+ecc`, `nbn+xsn+jp+ecs`, `nbn+xsn+jp+ef`, `nbn+xsn+jp+ef+jcr`, `nbn+xsn+jp+ep+ef`, `nbn+xsn+jxc`, `nbn+xsn+jxt`, `nbn+xsv+etm`, `nbu`, `nbu+jca`, `nbu+jca+jxc`, `nbu+jca+jxt`, `nbu+jcc`, `nbu+jcc+jxc`, `nbu+jcj`, `nbu+jcm`, `nbu+jco`, `nbu+jcs`, `nbu+jct`, `nbu+jct+jxc`, `nbu+jp+ecc`, `nbu+jp+ecs`, `nbu+jp+ef`, `nbu+jp+ef+jcr`, `nbu+jp+ef+jxc`, `nbu+jp+ep+ecc`, `nbu+jp+ep+ecs`, `nbu+jp+ep+ef`, `nbu+jp+ep+ef+jcr`, `nbu+jp+ep+etm`, `nbu+jp+ep+etn+jco`, `nbu+jp+etm`, `nbu+jxc`, `nbu+jxc+jca`, `nbu+jxc+jcs`, `nbu+jxc+jp+ef`, `nbu+jxc+jp+ep+ef`, `nbu+jxc+jxt`, `nbu+jxt`, `nbu+ncn`, `nbu+ncn+jca`, `nbu+ncn+jcm`, `nbu+xsn`, `nbu+xsn+jca`, `nbu+xsn+jca+jxc`, `nbu+xsn+jca+jxt`, `nbu+xsn+jcm`, `nbu+xsn+jco`, `nbu+xsn+jcs`, `nbu+xsn+jp+ecs`, `nbu+xsn+jp+ep+ef`, `nbu+xsn+jxc`, `nbu+xsn+jxc+jxt`, `nbu+xsn+jxt`, `nbu+xsv+ecc`, `nbu+xsv+etm`, `ncn`, `ncn+f+ncpa+jco`, `ncn+jca`, `ncn+jca+jca`, `ncn+jca+jcc`, `ncn+jca+jcj`, `ncn+jca+jcm`, `ncn+jca+jcs`, `ncn+jca+jct`, `ncn+jca+jp+ecc`, `ncn+jca+jp+ecs`, `ncn+jca+jp+ef`, `ncn+jca+jp+ep+ef`, `ncn+jca+jp+etm`, `ncn+jca+jp+etn+jxt`, `ncn+jca+jxc`, `ncn+jca+jxc+jcc`, `ncn+jca+jxc+jcm`, `ncn+jca+jxc+jxc`, `ncn+jca+jxc+jxt`, `ncn+jca+jxt`, `ncn+jcc`, `ncn+jcc+jxc`, `ncn+jcj`, `ncn+jcj+jxt`, `ncn+jcm`, `ncn+jco`, `ncn+jcr`, `ncn+jcr+jxc`, `ncn+jcs`, `ncn+jcs+jxt`, `ncn+jct`, `ncn+jct+jcm`, 
`ncn+jct+jxc`, `ncn+jct+jxt`, `ncn+jcv`, `ncn+jp+ecc`, `ncn+jp+ecc+jct`, `ncn+jp+ecc+jxc`, `ncn+jp+ecs`, `ncn+jp+ecs+jcm`, `ncn+jp+ecs+jco`, `ncn+jp+ecs+jxc`, `ncn+jp+ecs+jxt`, `ncn+jp+ecx`, `ncn+jp+ef`, `ncn+jp+ef+jca`, `ncn+jp+ef+jcm`, `ncn+jp+ef+jco`, `ncn+jp+ef+jcr`, `ncn+jp+ef+jcr+jxc`, `ncn+jp+ef+jcr+jxt`, `ncn+jp+ef+jp+etm`, `ncn+jp+ef+jxc`, `ncn+jp+ef+jxf`, `ncn+jp+ef+jxt`, `ncn+jp+ep+ecc`, `ncn+jp+ep+ecs`, `ncn+jp+ep+ecs+jxc`, `ncn+jp+ep+ecx`, `ncn+jp+ep+ef`, `ncn+jp+ep+ef+jcr`, `ncn+jp+ep+ef+jcr+jxc`, `ncn+jp+ep+ef+jxc`, `ncn+jp+ep+ef+jxf`, `ncn+jp+ep+ef+jxt`, `ncn+jp+ep+ep+etm`, `ncn+jp+ep+etm`, `ncn+jp+ep+etn`, `ncn+jp+ep+etn+jca`, `ncn+jp+ep+etn+jca+jxc`, `ncn+jp+ep+etn+jco`, `ncn+jp+ep+etn+jcs`, `ncn+jp+ep+etn+jxt`, `ncn+jp+etm`, `ncn+jp+etn`, `ncn+jp+etn+jca`, `ncn+jp+etn+jca+jxc`, `ncn+jp+etn+jca+jxt`, `ncn+jp+etn+jco`, `ncn+jp+etn+jcs`, `ncn+jp+etn+jct`, `ncn+jp+etn+jxc`, `ncn+jp+etn+jxt`, `ncn+jxc`, `ncn+jxc+jca`, `ncn+jxc+jca+jxc`, `ncn+jxc+jca+jxt`, `ncn+jxc+jcc`, `ncn+jxc+jcm`, `ncn+jxc+jco`, `ncn+jxc+jcs`, `ncn+jxc+jct+jxt`, `ncn+jxc+jp+ef`, `ncn+jxc+jp+ef+jcr`, `ncn+jxc+jp+ep+ecs`, `ncn+jxc+jp+ep+ef`, `ncn+jxc+jp+etm`, `ncn+jxc+jxc`, `ncn+jxc+jxt`, `ncn+jxt`, `ncn+jxt+jcm`, `ncn+jxt+jxc`, `ncn+nbn`, `ncn+nbn+jca`, `ncn+nbn+jcm`, `ncn+nbn+jcs`, `ncn+nbn+jp+ecc`, `ncn+nbn+jp+ep+ef`, `ncn+nbn+jxc`, `ncn+nbn+jxt`, `ncn+nbu`, `ncn+nbu+jca`, `ncn+nbu+jcm`, `ncn+nbu+jco`, `ncn+nbu+jp+ef`, `ncn+nbu+jxc`, `ncn+nbu+ncn`, `ncn+ncn`, `ncn+ncn+jca`, `ncn+ncn+jca+jcc`, `ncn+ncn+jca+jcm`, `ncn+ncn+jca+jxc`, `ncn+ncn+jca+jxc+jcm`, `ncn+ncn+jca+jxc+jxc`, `ncn+ncn+jca+jxt`, `ncn+ncn+jcc`, `ncn+ncn+jcj`, `ncn+ncn+jcm`, `ncn+ncn+jco`, `ncn+ncn+jcr`, `ncn+ncn+jcs`, `ncn+ncn+jct`, `ncn+ncn+jct+jcm`, `ncn+ncn+jct+jxc`, `ncn+ncn+jct+jxt`, `ncn+ncn+jp+ecc`, `ncn+ncn+jp+ecs`, `ncn+ncn+jp+ef`, `ncn+ncn+jp+ef+jcm`, `ncn+ncn+jp+ef+jcr`, `ncn+ncn+jp+ef+jcs`, `ncn+ncn+jp+ep+ecc`, `ncn+ncn+jp+ep+ecs`, `ncn+ncn+jp+ep+ef`, `ncn+ncn+jp+ep+ef+jcr`, `ncn+ncn+jp+ep+ep+etm`, `ncn+ncn+jp+ep+etm`, `ncn+ncn+jp+ep+etn`, `ncn+ncn+jp+etm`, `ncn+ncn+jp+etn`, `ncn+ncn+jp+etn+jca`, `ncn+ncn+jp+etn+jco`, `ncn+ncn+jp+etn+jxc`, `ncn+ncn+jxc`, `ncn+ncn+jxc+jca`, `ncn+ncn+jxc+jcc`, `ncn+ncn+jxc+jcm`, `ncn+ncn+jxc+jco`, `ncn+ncn+jxc+jcs`, `ncn+ncn+jxc+jxc`, `ncn+ncn+jxt`, `ncn+ncn+nbn`, `ncn+ncn+ncn`, `ncn+ncn+ncn+jca`, `ncn+ncn+ncn+jca+jcm`, `ncn+ncn+ncn+jca+jxt`, `ncn+ncn+ncn+jcj`, `ncn+ncn+ncn+jcm`, `ncn+ncn+ncn+jco`, `ncn+ncn+ncn+jcs`, `ncn+ncn+ncn+jct+jxt`, `ncn+ncn+ncn+jp+etn+jxc`, `ncn+ncn+ncn+jxt`, `ncn+ncn+ncn+ncn+jca`, `ncn+ncn+ncn+ncn+jca+jxt`, `ncn+ncn+ncn+ncn+jco`, `ncn+ncn+ncn+xsn+jp+etm`, `ncn+ncn+ncpa`, `ncn+ncn+ncpa+jca`, `ncn+ncn+ncpa+jcm`, `ncn+ncn+ncpa+jco`, `ncn+ncn+ncpa+jcs`, `ncn+ncn+ncpa+jxc`, `ncn+ncn+ncpa+jxt`, `ncn+ncn+ncpa+ncn`, `ncn+ncn+ncpa+ncn+jca`, `ncn+ncn+ncpa+ncn+jcj`, `ncn+ncn+ncpa+ncn+jcm`, `ncn+ncn+ncpa+ncn+jxt`, `ncn+ncn+xsn`, `ncn+ncn+xsn+jca`, `ncn+ncn+xsn+jca+jxt`, `ncn+ncn+xsn+jcj`, `ncn+ncn+xsn+jcm`, `ncn+ncn+xsn+jco`, `ncn+ncn+xsn+jcs`, `ncn+ncn+xsn+jct`, `ncn+ncn+xsn+jp+ecs`, `ncn+ncn+xsn+jp+ep+ef`, `ncn+ncn+xsn+jp+etm`, `ncn+ncn+xsn+jxc`, `ncn+ncn+xsn+jxc+jcs`, `ncn+ncn+xsn+jxt`, `ncn+ncn+xsv+ecc`, `ncn+ncn+xsv+etm`, `ncn+ncpa`, `ncn+ncpa+jca`, `ncn+ncpa+jca+jcm`, `ncn+ncpa+jca+jxc`, `ncn+ncpa+jca+jxt`, `ncn+ncpa+jcc`, `ncn+ncpa+jcj`, `ncn+ncpa+jcm`, `ncn+ncpa+jco`, `ncn+ncpa+jcr`, `ncn+ncpa+jcs`, `ncn+ncpa+jct`, `ncn+ncpa+jct+jcm`, `ncn+ncpa+jct+jxt`, `ncn+ncpa+jp+ecc`, `ncn+ncpa+jp+ecc+jxc`, `ncn+ncpa+jp+ecs`, `ncn+ncpa+jp+ecs+jxc`, `ncn+ncpa+jp+ef`, `ncn+ncpa+jp+ef+jcr`, 
`ncn+ncpa+jp+ef+jcr+jxc`, `ncn+ncpa+jp+ep+ef`, `ncn+ncpa+jp+ep+etm`, `ncn+ncpa+jp+ep+etn`, `ncn+ncpa+jp+etm`, `ncn+ncpa+jxc`, `ncn+ncpa+jxc+jca+jxc`, `ncn+ncpa+jxc+jco`, `ncn+ncpa+jxc+jcs`, `ncn+ncpa+jxt`, `ncn+ncpa+nbn+jcs`, `ncn+ncpa+ncn`, `ncn+ncpa+ncn+jca`, `ncn+ncpa+ncn+jca+jcm`, `ncn+ncpa+ncn+jca+jxc`, `ncn+ncpa+ncn+jca+jxt`, `ncn+ncpa+ncn+jcj`, `ncn+ncpa+ncn+jcm`, `ncn+ncpa+ncn+jco`, `ncn+ncpa+ncn+jcs`, `ncn+ncpa+ncn+jct`, `ncn+ncpa+ncn+jct+jcm`, `ncn+ncpa+ncn+jp+ef+jcr`, `ncn+ncpa+ncn+jp+ep+etm`, `ncn+ncpa+ncn+jxc`, `ncn+ncpa+ncn+jxt`, `ncn+ncpa+ncn+xsn+jcm`, `ncn+ncpa+ncn+xsn+jxt`, `ncn+ncpa+ncpa`, `ncn+ncpa+ncpa+jca`, `ncn+ncpa+ncpa+jcj`, `ncn+ncpa+ncpa+jcm`, `ncn+ncpa+ncpa+jco`, `ncn+ncpa+ncpa+jcs`, `ncn+ncpa+ncpa+jp+ep+ef`, `ncn+ncpa+ncpa+jxt`, `ncn+ncpa+ncpa+ncn`, `ncn+ncpa+xsn`, `ncn+ncpa+xsn+jcm`, `ncn+ncpa+xsn+jco`, `ncn+ncpa+xsn+jcs`, `ncn+ncpa+xsn+jp+ecc`, `ncn+ncpa+xsn+jp+etm`, `ncn+ncpa+xsn+jxt`, `ncn+ncpa+xsv+ecc`, `ncn+ncpa+xsv+ecs`, `ncn+ncpa+xsv+ecx`, `ncn+ncpa+xsv+ecx+px+etm`, `ncn+ncpa+xsv+ef`, `ncn+ncpa+xsv+ef+jcm`, `ncn+ncpa+xsv+ef+jcr`, `ncn+ncpa+xsv+etm`, _(truncated: full list in pipeline meta)_ |
| **`morphologizer`** | `POS=CCONJ`, `POS=ADV`, `POS=SCONJ`, `POS=DET`, `POS=NOUN`, `POS=VERB`, `POS=ADJ`, `POS=PUNCT`, `POS=SPACE`, `POS=AUX`, `POS=PRON`, `POS=PROPN`, `POS=NUM`, `POS=INTJ`, `POS=PART`, `POS=X`, `POS=ADP`, `POS=SYM` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `punct`, `xcomp` |
| **`ner`** | `DT`, `LC`, `OG`, `PS`, `QT`, `TI` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 100.00 |
| `TOKEN_P` | 100.00 |
| `TOKEN_R` | 100.00 |
| `TOKEN_F` | 100.00 |
| `TAG_ACC` | 84.00 |
| `POS_ACC` | 94.88 |
| `SENTS_P` | 100.00 |
| `SENTS_R` | 100.00 |
| `SENTS_F` | 100.00 |
| `DEP_UAS` | 84.17 |
| `DEP_LAS` | 81.40 |
| `LEMMA_ACC` | 90.09 |
| `ENTS_P` | 86.69 |
| `ENTS_R` | 83.73 |
| `ENTS_F` | 85.19 | |
DARKVIP3R/DialoGPT-medium-Anakin | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
---
**Exact Match:** 83.19
**F1:** 90.46
Check out [linkbert-large-finetuned-squad](https://huggingface.co/niklaspm/linkbert-large-finetuned-squad), which achieves F1: 92.68 and EM: 86.5.
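A hedged extractive-QA sketch using the Transformers pipeline; the checkpoint id below is the larger model linked above, so substitute this card's own checkpoint id when using this model:

```python
from transformers import pipeline

# Model id is the larger checkpoint linked above; swap in this card's
# checkpoint id as appropriate.
qa = pipeline("question-answering", model="niklaspm/linkbert-large-finetuned-squad")

result = qa(
    question="What additional signal does LinkBERT use during pretraining?",
    context="LinkBERT pretrains a language model on documents together with "
            "hyperlink structure, placing linked documents in the same context.",
)
print(result["answer"], result["score"])
```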
See [LinkBERT Paper](https://arxiv.org/abs/2203.15827) |
DCU-NLP/electra-base-irish-cased-generator-v1 | [
"pytorch",
"electra",
"fill-mask",
"ga",
"transformers",
"irish",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"ElectraForMaskedLM"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- spacy
- token-classification
language:
- fi
license: cc-by-sa-4.0
model-index:
- name: fi_core_news_sm
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.7942386831
- name: NER Recall
type: recall
value: 0.7396660279
- name: NER F Score
type: f_score
value: 0.7659815734
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9334610123
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9256949004
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.8656455142
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.8313131588
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.787412632
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.7167692749
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.8925925926
---
### Details: https://spacy.io/models/fi#fi_core_news_sm
Finnish pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
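A small lemmatization sketch, plus the documented pattern for swapping in `senter` when only sentence boundaries are needed (assumes the pipeline is installed, e.g. via `python -m spacy download fi_core_news_sm`):

```python
import spacy

nlp = spacy.load("fi_core_news_sm")
doc = nlp("Helsingin yliopisto perustettiin vuonna 1640.")

# The trainable lemmatizer predicts lemmas; `morph` holds the UFeats string.
for token in doc:
    print(token.text, token.lemma_, token.morph)

# For faster sentence splitting, disable the parser and enable `senter`.
nlp_fast = spacy.load("fi_core_news_sm", exclude=["parser"])
nlp_fast.enable_pipe("senter")
```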
| Feature | Description |
| --- | --- |
| **Name** | `fi_core_news_sm` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Finnish TDT v2.8](https://github.com/UniversalDependencies/UD_Finnish-TDT) (Ginter, Filip; Kanerva, Jenna; Laippala, Veronika; Miekka, Niko; Missilä, Anna; Ojala, Stina; Pyysalo, Sampo)<br />[TurkuONE (ffe2040e)](https://github.com/TurkuNLP/turku-one) (Jouni Luoma, Li-Hsin Chang, Filip Ginter, Sampo Pyysalo) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (2145 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `A`, `Adj`, `Adp`, `Adv`, `Adv_V`, `C`, `C_V`, `Foreign`, `Interj`, `N`, `Num`, `Pron`, `Punct`, `Symb`, `V`, `V_Pron`, `_SP` |
| **`morphologizer`** | `Case=Nom\|Number=Sing\|POS=NOUN`, `NumType=Ord\|POS=ADJ`, `Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=U\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Case=Par\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Par\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Case=Nom\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `POS=SCONJ`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Case=Abl\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Kin\|POS=ADV`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ela\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `POS=ADJ`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ins\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Par\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ill\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=All\|Derivation=U\|Number=Sing\|POS=NOUN`, `AdpType=Post\|POS=ADP`, `Case=Nom\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Abl\|Number=Sing\|POS=NOUN`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Ine\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Par\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Gen\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Par\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Tra\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=ADJ`, 
`Case=Par\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `InfForm=1\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `InfForm=1\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Derivation=Sti\|POS=ADV`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ine\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ill\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=All\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=All\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Plur\|POS=NOUN`, `Case=Ela\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Clitic=Ka\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Connegative=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Tra\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Derivation=Vs\|Number=Sing\|POS=NOUN`, 
`Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Derivation=Ja\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ine\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON`, `Case=Nom\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Clitic=Ko\|Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Ade\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Derivation=Ttain\|POS=ADV`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Plur\|POS=NOUN`, `Case=Com\|POS=NOUN\|Person[psor]=3`, `Case=Com\|POS=PRON\|Person[psor]=3\|PronType=Ind`, `Number[psor]=Sing\|POS=ADV\|Person[psor]=1`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Int`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `POS=SPACE`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Ade\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Connegative=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ill\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=SCONJ\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `AdpType=Post\|POS=ADP\|Person[psor]=3`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ill\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Abbr=Yes\|Case=Ine\|Number=Sing\|POS=NOUN`, 
`Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Plur\|POS=NOUN`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Par\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=PROPN\|Style=Coll`, `Abbr=Yes\|Case=Par\|Number=Sing\|POS=NOUN`, `Case=Ess\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ess\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PROPN`, `Case=Par\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=VERB\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `NumType=Card\|POS=NUM`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ill\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|InfForm=2\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Ela\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ade\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ade\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Dem`, `Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Clitic=Ko\|Number=Sing\|POS=SCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Dem`, `Connegative=Yes\|Mood=Cnd\|POS=AUX\|VerbForm=Fin`, `Case=Ela\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=Llinen,Vs\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Gen\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `POS=SYM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rel`, `Clitic=Ka\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ess\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=SCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Kaan\|POS=ADV`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, 
`Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ade\|Derivation=U\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|POS=ADV`, `Case=Ine\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `POS=ADV\|Typo=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ela\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ela\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=All\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Ela\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=All\|Clitic=Kin\|Number=Sing\|POS=PROPN`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Derivation=Vs\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `AdpType=Prep\|POS=ADP`, `Case=Par\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Vs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Style=Coll`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `POS=INTJ`, `Case=Nom\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ess\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Case=Ine\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Ind`, 
`Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Par\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Ela\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Abl\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Abl\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Tra\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Abe\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Tra\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Typo=Yes`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=All\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|Case=Nom\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Clitic=Kin\|Mood=Cnd\|POS=AUX\|VerbForm=Fin\|Voice=Pass`, `Clitic=Han\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Derivation=U\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Ind`, `Abbr=Yes\|Case=Gen\|Number=Sing\|POS=PROPN`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Derivation=Sti\|POS=ADV\|Typo=Yes`, `Case=All\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Tar\|Number=Sing\|POS=NOUN`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Clitic=Kin\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Par\|Number=Plur\|POS=NOUN\|Person[psor]=3`, 
`Case=Nom\|Clitic=Kin\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ade\|Number=Sing\|POS=PROPN`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ess\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Clitic=Ka\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=NOUN\|Style=Coll`, `Case=Ill\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Nom\|Clitic=Kaan\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Par\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=VERB\|Person=0\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Case=Ela\|Number=Sing\|POS=PROPN`, `Case=Nom\|Clitic=Pa\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=ADJ\|Typo=Yes`, `POS=ADV\|Style=Coll`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Tra\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Par\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=NOUN\|Style=Coll`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem\|Typo=Yes`, `Case=Ine\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Par\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|NumType=Card\|Number=Plur\|POS=NUM\|Typo=Yes`, `Clitic=Ko\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Case=Ill\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Abbr=Yes\|Case=Abl\|Number=Sing\|POS=PROPN`, `Case=Abl\|Number=Sing\|POS=PROPN`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Ade\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=Ade\|Degree=Cmp\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Par\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM\|Typo=Yes`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Han\|POS=ADV`, `Case=Par\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Derivation=Llinen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=NOUN`, `Case=Abl\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Par\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Par\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Ela\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=All\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Tra\|InfForm=1\|Number=Sing\|POS=VERB\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=All\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Nom\|Clitic=Kin\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=SCONJ\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, 
`Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Gen\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Clitic=Ko\|Number=Sing\|POS=VERB\|Person=0\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Clitic=Han\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Ess\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ela\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|Reflex=Yes`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Clitic=Kaan\|Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Degree=Sup\|Derivation=Sti\|POS=ADV`, `Case=Ine\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Par\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN`, _(truncated: full list in pipeline meta)_ |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `compound`, `compound:nn`, `compound:prt`, `conj`, `cop`, `cop:own`, `csubj`, `csubj:cop`, `dep`, `det`, `discourse`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `mark`, `nmod`, `nmod:gobj`, `nmod:gsubj`, `nmod:poss`, `nsubj`, `nsubj:cop`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp`, `xcomp:ds` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
</details>
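
The UFeats strings enumerated above decompose at runtime into per-token feature/value pairs. A minimal sketch, assuming a Finnish pipeline from this family is installed (`fi_core_news_md`, documented below, is used here as a stand-in; the example sentence is arbitrary):

```python
import spacy

# Stand-in model name; any installed fi_core_news_* pipeline exposes the same API.
nlp = spacy.load("fi_core_news_md")

doc = nlp("Luin kirjan eilen.")  # "I read the book yesterday."
for token in doc:
    # token.morph holds the same Case=...|Number=... strings listed above;
    # .get() extracts the values of a single feature (as a list).
    print(token.text, token.morph, token.morph.get("Case"))
```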
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 100.00 |
| `TOKEN_P` | 99.79 |
| `TOKEN_R` | 99.90 |
| `TOKEN_F` | 99.85 |
| `TAG_ACC` | 93.35 |
| `POS_ACC` | 92.57 |
| `MORPH_ACC` | 86.56 |
| `MORPH_MICRO_P` | 91.98 |
| `MORPH_MICRO_R` | 90.45 |
| `MORPH_MICRO_F` | 91.21 |
| `SENTS_P` | 90.19 |
| `SENTS_R` | 88.34 |
| `SENTS_F` | 89.26 |
| `DEP_UAS` | 78.74 |
| `DEP_LAS` | 71.68 |
| `LEMMA_ACC` | 83.13 |
| `ENTS_P` | 79.42 |
| `ENTS_R` | 73.97 |
| `ENTS_F` | 76.60 | |
DHBaek/gpt2-stackoverflow-question-contents-generator | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
tags:
- spacy
- token-classification
language:
- fi
license: cc-by-sa-4.0
model-index:
- name: fi_core_news_md
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8190770962
- name: NER Recall
type: recall
value: 0.7968792773
- name: NER F Score
type: f_score
value: 0.807825725
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9659361405
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9586650253
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9186882914
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.8602402419
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8321792131
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.7845751467
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.8935543278
---
### Details: https://spacy.io/models/fi#fi_core_news_md
Finnish pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
| Feature | Description |
| --- | --- |
| **Name** | `fi_core_news_md` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | floret (50000, 300) |
| **Sources** | [UD Finnish TDT v2.8](https://github.com/UniversalDependencies/UD_Finnish-TDT) (Ginter, Filip; Kanerva, Jenna; Laippala, Veronika; Miekka, Niko; Missilä, Anna; Ojala, Stina; Pyysalo, Sampo)<br />[TurkuONE (ffe2040e)](https://github.com/TurkuNLP/turku-one) (Jouni Luoma, Li-Hsin Chang, Filip Ginter, Sampo Pyysalo)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
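
A minimal usage sketch for the components listed above, assuming the package has been installed (e.g. with `python -m spacy download fi_core_news_md`); the example sentence is arbitrary:

```python
import spacy

nlp = spacy.load("fi_core_news_md")

# The active pipeline should match the "Default Pipeline" row above;
# `senter` ships disabled in the core pipelines and is only enabled
# when sentence segmentation is wanted without the parser.
print(nlp.pipe_names)
# ['tok2vec', 'tagger', 'morphologizer', 'parser', 'lemmatizer',
#  'attribute_ruler', 'ner']

doc = nlp("Suomi on tasavalta Pohjois-Euroopassa.")
for token in doc:
    print(token.text, token.pos_, token.lemma_, token.dep_)
```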
### Label Scheme
<details>
<summary>View label scheme (2145 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `A`, `Adj`, `Adp`, `Adv`, `Adv_V`, `C`, `C_V`, `Foreign`, `Interj`, `N`, `Num`, `Pron`, `Punct`, `Symb`, `V`, `V_Pron`, `_SP` |
| **`morphologizer`** | `Case=Nom\|Number=Sing\|POS=NOUN`, `NumType=Ord\|POS=ADJ`, `Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=U\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Case=Par\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Par\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Case=Nom\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `POS=SCONJ`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Case=Abl\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Kin\|POS=ADV`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ela\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `POS=ADJ`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ins\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Par\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ill\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=All\|Derivation=U\|Number=Sing\|POS=NOUN`, `AdpType=Post\|POS=ADP`, `Case=Nom\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Abl\|Number=Sing\|POS=NOUN`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Ine\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Par\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Gen\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Par\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Tra\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=ADJ`, 
`Case=Par\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `InfForm=1\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `InfForm=1\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Derivation=Sti\|POS=ADV`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ine\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ill\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=All\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=All\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Plur\|POS=NOUN`, `Case=Ela\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Clitic=Ka\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Connegative=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Tra\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Derivation=Vs\|Number=Sing\|POS=NOUN`, 
`Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Derivation=Ja\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ine\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON`, `Case=Nom\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Clitic=Ko\|Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Ade\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Derivation=Ttain\|POS=ADV`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Plur\|POS=NOUN`, `Case=Com\|POS=NOUN\|Person[psor]=3`, `Case=Com\|POS=PRON\|Person[psor]=3\|PronType=Ind`, `Number[psor]=Sing\|POS=ADV\|Person[psor]=1`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Int`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `POS=SPACE`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Ade\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Connegative=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ill\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=SCONJ\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `AdpType=Post\|POS=ADP\|Person[psor]=3`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ill\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Abbr=Yes\|Case=Ine\|Number=Sing\|POS=NOUN`, 
`Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Plur\|POS=NOUN`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Par\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=PROPN\|Style=Coll`, `Abbr=Yes\|Case=Par\|Number=Sing\|POS=NOUN`, `Case=Ess\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ess\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PROPN`, `Case=Par\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=VERB\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `NumType=Card\|POS=NUM`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ill\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|InfForm=2\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Ela\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ade\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ade\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Dem`, `Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Clitic=Ko\|Number=Sing\|POS=SCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Dem`, `Connegative=Yes\|Mood=Cnd\|POS=AUX\|VerbForm=Fin`, `Case=Ela\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=Llinen,Vs\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Gen\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `POS=SYM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rel`, `Clitic=Ka\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ess\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=SCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Kaan\|POS=ADV`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, 
`Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ade\|Derivation=U\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|POS=ADV`, `Case=Ine\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `POS=ADV\|Typo=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ela\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ela\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=All\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Ela\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=All\|Clitic=Kin\|Number=Sing\|POS=PROPN`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Derivation=Vs\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `AdpType=Prep\|POS=ADP`, `Case=Par\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Vs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Style=Coll`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `POS=INTJ`, `Case=Nom\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ess\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Case=Ine\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Ind`, 
`Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Par\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Ela\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Abl\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Abl\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Tra\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Abe\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Tra\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Typo=Yes`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=All\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|Case=Nom\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Clitic=Kin\|Mood=Cnd\|POS=AUX\|VerbForm=Fin\|Voice=Pass`, `Clitic=Han\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Derivation=U\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Ind`, `Abbr=Yes\|Case=Gen\|Number=Sing\|POS=PROPN`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Derivation=Sti\|POS=ADV\|Typo=Yes`, `Case=All\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Tar\|Number=Sing\|POS=NOUN`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Clitic=Kin\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Par\|Number=Plur\|POS=NOUN\|Person[psor]=3`, 
`Case=Nom\|Clitic=Kin\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ade\|Number=Sing\|POS=PROPN`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ess\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Clitic=Ka\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=NOUN\|Style=Coll`, `Case=Ill\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Nom\|Clitic=Kaan\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Par\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=VERB\|Person=0\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Case=Ela\|Number=Sing\|POS=PROPN`, `Case=Nom\|Clitic=Pa\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=ADJ\|Typo=Yes`, `POS=ADV\|Style=Coll`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Tra\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Par\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=NOUN\|Style=Coll`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem\|Typo=Yes`, `Case=Ine\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Par\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|NumType=Card\|Number=Plur\|POS=NUM\|Typo=Yes`, `Clitic=Ko\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Case=Ill\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Abbr=Yes\|Case=Abl\|Number=Sing\|POS=PROPN`, `Case=Abl\|Number=Sing\|POS=PROPN`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Ade\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=Ade\|Degree=Cmp\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Par\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM\|Typo=Yes`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Han\|POS=ADV`, `Case=Par\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Derivation=Llinen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=NOUN`, `Case=Abl\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Par\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Par\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Ela\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=All\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Tra\|InfForm=1\|Number=Sing\|POS=VERB\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=All\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Nom\|Clitic=Kin\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=SCONJ\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, 
`Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Gen\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Clitic=Ko\|Number=Sing\|POS=VERB\|Person=0\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Clitic=Han\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Ess\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ela\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|Reflex=Yes`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Clitic=Kaan\|Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Degree=Sup\|Derivation=Sti\|POS=ADV`, `Case=Ine\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Par\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN`, _(truncated: full list in pipeline meta)_ |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `compound`, `compound:nn`, `compound:prt`, `conj`, `cop`, `cop:own`, `csubj`, `csubj:cop`, `dep`, `det`, `discourse`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `mark`, `nmod`, `nmod:gobj`, `nmod:gsubj`, `nmod:poss`, `nsubj`, `nsubj:cop`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp`, `xcomp:ds` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 100.00 |
| `TOKEN_P` | 99.79 |
| `TOKEN_R` | 99.90 |
| `TOKEN_F` | 99.85 |
| `TAG_ACC` | 96.59 |
| `POS_ACC` | 95.87 |
| `MORPH_ACC` | 91.87 |
| `MORPH_MICRO_P` | 95.90 |
| `MORPH_MICRO_R` | 94.93 |
| `MORPH_MICRO_F` | 95.41 |
| `SENTS_P` | 89.79 |
| `SENTS_R` | 88.93 |
| `SENTS_F` | 89.36 |
| `DEP_UAS` | 83.22 |
| `DEP_LAS` | 78.46 |
| `LEMMA_ACC` | 86.02 |
| `ENTS_P` | 81.91 |
| `ENTS_R` | 79.69 |
| `ENTS_F` | 80.78 | |
DHBaek/xlm-roberta-large-korquad-mask | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- spacy
- token-classification
language:
- fi
license: cc-by-sa-4.0
model-index:
- name: fi_core_news_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8236272879
- name: NER Recall
type: recall
value: 0.813030386
- name: NER F Score
type: f_score
value: 0.8182945309
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9709439124
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9628474502
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9221890983
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.8653065672
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.8371365653
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.7941298453
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9083487941
---
### Details: https://spacy.io/models/fi#fi_core_news_lg
Finnish pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
| Feature | Description |
| --- | --- |
| **Name** | `fi_core_news_lg` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | floret (200000, 300) |
| **Sources** | [UD Finnish TDT v2.8](https://github.com/UniversalDependencies/UD_Finnish-TDT) (Ginter, Filip; Kanerva, Jenna; Laippala, Veronika; Miekka, Niko; Missilä, Anna; Ojala, Stina; Pyysalo, Sampo)<br />[TurkuONE (ffe2040e)](https://github.com/TurkuNLP/turku-one) (Jouni Luoma, Li-Hsin Chang, Filip Ginter, Sampo Pyysalo)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
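A minimal usage sketch (the example sentence is illustrative; the package must be installed first, e.g. via `python -m spacy download fi_core_news_lg`):
```python
import spacy

nlp = spacy.load("fi_core_news_lg")
doc = nlp("Sanna Marin vieraili Helsingissä viime maanantaina.")

# Token-level annotations from the tagger, morphologizer, lemmatizer and parser
for token in doc:
    print(token.text, token.pos_, token.morph, token.lemma_, token.dep_)

# Entity spans from the ner component
for ent in doc.ents:
    print(ent.text, ent.label_)
```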
### Label Scheme
<details>
<summary>View label scheme (2145 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `A`, `Adj`, `Adp`, `Adv`, `Adv_V`, `C`, `C_V`, `Foreign`, `Interj`, `N`, `Num`, `Pron`, `Punct`, `Symb`, `V`, `V_Pron`, `_SP` |
| **`morphologizer`** | `Case=Nom\|Number=Sing\|POS=NOUN`, `NumType=Ord\|POS=ADJ`, `Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=U\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Case=Par\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Par\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Case=Nom\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `POS=SCONJ`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Case=Abl\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Kin\|POS=ADV`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ela\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `POS=ADJ`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ins\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Par\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ill\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=All\|Derivation=U\|Number=Sing\|POS=NOUN`, `AdpType=Post\|POS=ADP`, `Case=Nom\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Abl\|Number=Sing\|POS=NOUN`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Ine\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Par\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Gen\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Par\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Tra\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=ADJ`, 
`Case=Par\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `InfForm=1\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `InfForm=1\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Derivation=Sti\|POS=ADV`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ine\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ill\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=All\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=All\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Plur\|POS=NOUN`, `Case=Ela\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Clitic=Ka\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Connegative=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Tra\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Derivation=Vs\|Number=Sing\|POS=NOUN`, 
`Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Derivation=Ja\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ine\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON`, `Case=Nom\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Clitic=Ko\|Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Ade\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Derivation=Ttain\|POS=ADV`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Plur\|POS=NOUN`, `Case=Com\|POS=NOUN\|Person[psor]=3`, `Case=Com\|POS=PRON\|Person[psor]=3\|PronType=Ind`, `Number[psor]=Sing\|POS=ADV\|Person[psor]=1`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Int`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `POS=SPACE`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Ade\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Connegative=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ill\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=SCONJ\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `AdpType=Post\|POS=ADP\|Person[psor]=3`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ill\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Abbr=Yes\|Case=Ine\|Number=Sing\|POS=NOUN`, 
`Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Plur\|POS=NOUN`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Par\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=PROPN\|Style=Coll`, `Abbr=Yes\|Case=Par\|Number=Sing\|POS=NOUN`, `Case=Ess\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ess\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PROPN`, `Case=Par\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=VERB\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `NumType=Card\|POS=NUM`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ill\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|InfForm=2\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Ela\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ade\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ade\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Dem`, `Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Clitic=Ko\|Number=Sing\|POS=SCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Dem`, `Connegative=Yes\|Mood=Cnd\|POS=AUX\|VerbForm=Fin`, `Case=Ela\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=Llinen,Vs\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Gen\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `POS=SYM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rel`, `Clitic=Ka\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ess\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=SCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Kaan\|POS=ADV`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, 
`Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ade\|Derivation=U\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|POS=ADV`, `Case=Ine\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `POS=ADV\|Typo=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ela\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ela\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=All\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Ela\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=All\|Clitic=Kin\|Number=Sing\|POS=PROPN`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Derivation=Vs\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `AdpType=Prep\|POS=ADP`, `Case=Par\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Vs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Style=Coll`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `POS=INTJ`, `Case=Nom\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ess\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Case=Ine\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Ind`, 
`Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Par\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Ela\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Abl\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Abl\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Tra\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Abe\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Tra\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Typo=Yes`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=All\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|Case=Nom\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Clitic=Kin\|Mood=Cnd\|POS=AUX\|VerbForm=Fin\|Voice=Pass`, `Clitic=Han\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Derivation=U\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Ind`, `Abbr=Yes\|Case=Gen\|Number=Sing\|POS=PROPN`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Derivation=Sti\|POS=ADV\|Typo=Yes`, `Case=All\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Tar\|Number=Sing\|POS=NOUN`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Clitic=Kin\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Par\|Number=Plur\|POS=NOUN\|Person[psor]=3`, 
`Case=Nom\|Clitic=Kin\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ade\|Number=Sing\|POS=PROPN`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ess\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Clitic=Ka\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=NOUN\|Style=Coll`, `Case=Ill\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Nom\|Clitic=Kaan\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Par\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=VERB\|Person=0\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Case=Ela\|Number=Sing\|POS=PROPN`, `Case=Nom\|Clitic=Pa\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=ADJ\|Typo=Yes`, `POS=ADV\|Style=Coll`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Tra\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Par\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=NOUN\|Style=Coll`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem\|Typo=Yes`, `Case=Ine\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Par\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|NumType=Card\|Number=Plur\|POS=NUM\|Typo=Yes`, `Clitic=Ko\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Case=Ill\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Abbr=Yes\|Case=Abl\|Number=Sing\|POS=PROPN`, `Case=Abl\|Number=Sing\|POS=PROPN`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Ade\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=Ade\|Degree=Cmp\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Par\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM\|Typo=Yes`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Han\|POS=ADV`, `Case=Par\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Derivation=Llinen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=NOUN`, `Case=Abl\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Par\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Par\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Ela\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=All\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Tra\|InfForm=1\|Number=Sing\|POS=VERB\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=All\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Nom\|Clitic=Kin\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=SCONJ\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, 
`Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Gen\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Clitic=Ko\|Number=Sing\|POS=VERB\|Person=0\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Clitic=Han\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Ess\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ela\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|Reflex=Yes`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Clitic=Kaan\|Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Degree=Sup\|Derivation=Sti\|POS=ADV`, `Case=Ine\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Par\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN`, _(truncated: full list in pipeline meta)_ |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `compound`, `compound:nn`, `compound:prt`, `conj`, `cop`, `cop:own`, `csubj`, `csubj:cop`, `dep`, `det`, `discourse`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `mark`, `nmod`, `nmod:gobj`, `nmod:gsubj`, `nmod:poss`, `nsubj`, `nsubj:cop`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp`, `xcomp:ds` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 100.00 |
| `TOKEN_P` | 99.79 |
| `TOKEN_R` | 99.90 |
| `TOKEN_F` | 99.85 |
| `TAG_ACC` | 97.09 |
| `POS_ACC` | 96.28 |
| `MORPH_ACC` | 92.22 |
| `MORPH_MICRO_P` | 96.26 |
| `MORPH_MICRO_R` | 95.17 |
| `MORPH_MICRO_F` | 95.71 |
| `SENTS_P` | 91.96 |
| `SENTS_R` | 89.74 |
| `SENTS_F` | 90.83 |
| `DEP_UAS` | 83.71 |
| `DEP_LAS` | 79.41 |
| `LEMMA_ACC` | 86.53 |
| `ENTS_P` | 82.36 |
| `ENTS_R` | 81.30 |
| `ENTS_F` | 81.83 | |
DJSammy/bert-base-danish-uncased_BotXO-ai | [
"pytorch",
"jax",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"transformers",
"bert",
"masked-lm",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- tristantristantristan/autotrain-data-rumour_detection
co2_eq_emissions: 0.056186258092819436
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 813825547
- CO2 Emissions (in grams): 0.056186258092819436
## Validation Metrics
- Loss: 0.15057753026485443
- Accuracy: 0.9738805970149254
- Precision: 0.9469026548672567
- Recall: 0.9304347826086956
- AUC: 0.9891149437157905
- F1: 0.9385964912280702
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/tristantristantristan/autotrain-rumour_detection-813825547
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("tristantristantristan/autotrain-rumour_detection-813825547", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("tristantristantristan/autotrain-rumour_detection-813825547", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
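# The forward pass above returns raw logits. The post-processing below is a
# minimal sketch (not part of the original card): a softmax turns logits into
# probabilities, and id2label maps the argmax back to a class name.
import torch
probs = torch.softmax(outputs.logits, dim=-1)
print(model.config.id2label[int(probs.argmax())], float(probs.max()))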
``` |
DJSammy/bert-base-swedish-uncased_BotXO-ai | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library, but they are also fully compatible with the `ZeroShotClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research has addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained on NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() and/or ANLI [(Nie et al., 2020)]() and then fine-tuned on specific tasks that were previously converted to the textual entailment format.
For more information, please take a look at the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
- `M`: Multi-Genre Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine-tuning the entailment model. Note that for more than one dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
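As a hedged illustration of the pipeline compatibility mentioned above (the sentence and candidate labels are invented; the checkpoint is one of those named in this card):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="HiTZ/A2T_RoBERTa_SMFA_ACE-arg")

# The trigger span is marked with [[ ]] as described above.
sentence = "The [[ attack ]] on the base was carried out with mortars."
result = classifier(sentence, candidate_labels=["attacker", "instrument", "place"])
print(result["labels"][0], result["scores"][0])
```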
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` |
DKpro000/DialoGPT-medium-harrypotter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- accuracy
- f1
---
# Distilbert-base-uncased-emotion
## Model description:
[DistilBERT](https://arxiv.org/abs/1910.01108) is trained with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language-understanding capability. It is smaller and faster than BERT and other BERT-based models.
[Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) fine-tuned on the emotion dataset using the Hugging Face Trainer with the hyperparameters below (a minimal training sketch follows):
```
learning_rate = 2e-5
batch_size = 64
num_train_epochs = 8
```
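For reference, a minimal training sketch under those settings (an assumption-laden reconstruction, not the exact script; the original is in the Colab notebook linked under Training procedure):
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("emotion")  # 6 classes: sadness, joy, love, anger, fear, surprise
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    num_train_epochs=8,
)
Trainer(model=model, args=args,
        train_dataset=dataset["train"],
        eval_dataset=dataset["validation"]).train()
```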
## Model Performance Comparison on the Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97 | 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use")
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.0006792712374590337},
{'label': 'joy', 'score': 0.9959300756454468},
{'label': 'love', 'score': 0.0009452480007894337},
{'label': 'anger', 'score': 0.0018055217806249857},
{'label': 'fear', 'score': 0.00041110432357527316},
{'label': 'surprise', 'score': 0.0002288572577526793}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
## Eval results
```json
{
'test_accuracy': 0.938,
'test_f1': 0.937932884041714,
'test_loss': 0.1472451239824295,
'test_mem_cpu_alloc_delta': 0,
'test_mem_cpu_peaked_delta': 0,
'test_mem_gpu_alloc_delta': 0,
'test_mem_gpu_peaked_delta': 163454464,
'test_runtime': 5.0164,
'test_samples_per_second': 398.69
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) |
DLNLP/t5-small-finetuned-xsum | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: afl-3.0
widget:
- text: "The case of a 72-year-old male with @DISEASE$ with poor insulin control (fasting hyperglycemia greater than 180 mg/dl) who had a long-standing polyuric syndrome is here presented. Hypernatremia and plasma osmolality elevated together with a low urinary osmolality led to the suspicion of diabetes insipidus, which was subsequently confirmed by the dehydration test and the administration of @GENE$ sc."
example_title: "Example 1"
- text: "Hypernatremia and plasma osmolality elevated together with a low urinary osmolality led to the suspicion of diabetes insipidus, which was subsequently confirmed by the dehydration test and the administration of @GENE$ sc. With 61% increase in the calculated urinary osmolarity one hour post desmopressin s.c., @DISEASE$ was diagnosed."
example_title: "Example 2"
---
The following is a fine-tuned version of the BioBERT model on the GAD (Genetic Association Database) gene-disease relation dataset.
The model works by masking the gene string with "@GENE$" and the disease string with "@DISEASE$".
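For illustration, a minimal sketch of that input format (the checkpoint path is a placeholder and the printed score is invented):
```python
from transformers import pipeline

# Placeholder path: substitute the actual checkpoint for this card.
classifier = pipeline("text-classification", model="path/to/biobert-gad")

masked = ("The dehydration test and the administration of @GENE$ sc. "
          "confirmed the suspicion of @DISEASE$.")
print(classifier(masked))  # e.g. [{'label': 'LABEL1', 'score': 0.97}]
```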
The output is a text-classification label, which is either:
- "LABEL0" if there is no relation
- "LABEL1" if there is a relation. |
DSI/TweetBasedSA | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sqac
model-index:
- name: roberta-base-bne-finetuned-sqac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-sqac
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
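These settings map onto the `Trainer` API roughly as follows (a minimal sketch, not the generated training script; the Adam betas and epsilon in the list are the optimizer defaults):
```python
from transformers import TrainingArguments

# output_dir is a placeholder; all other values mirror the list above.
args = TrainingArguments(
    output_dir="roberta-base-bne-finetuned-sqac",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```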
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0033 | 1.0 | 1196 | 0.8764 |
| 0.4659 | 2.0 | 2392 | 0.8998 |
| 0.152 | 3.0 | 3588 | 1.1857 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DSI/ar_emotion_6 | [
"pytorch",
"bert",
"transformers"
] | null | {
"architectures": [
"BertForMultiLabelSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8646864686468646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3328
- Accuracy: 0.8633
- F1: 0.8647
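The Accuracy and F1 above are typically produced by a `compute_metrics` hook like the following sketch (an illustration consistent with the framework versions listed below, not the author's code):
```python
import numpy as np
from datasets import load_metric

accuracy_metric = load_metric("accuracy")
f1_metric = load_metric("f1")

def compute_metrics(eval_pred):
    # eval_pred bundles the model logits and the gold labels.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1_metric.compute(predictions=preds, references=labels)["f1"],
    }
```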
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DSI/human-directed-sentiment | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0337
- Accuracy: 0.7888
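A hedged inference sketch for the multiple-choice (SWAG-style) setup; the checkpoint path and the example continuations are placeholders:
```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

checkpoint = "path/to/bert-base-uncased-finetuned-swag"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMultipleChoice.from_pretrained(checkpoint)

prompt = "A man sits down at the piano."
endings = ["He begins to play a melody.", "He eats the piano.",
           "He swims across the keys.", "He folds the piano in half."]

# Each candidate is encoded as a (prompt, ending) pair; the extra batch
# dimension gives the (1, num_choices, seq_len) shape the model expects.
enc = tokenizer([prompt] * len(endings), endings, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**enc).logits  # shape (1, num_choices)
print(endings[int(logits.argmax())])
```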
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7451 | 1.0 | 4597 | 0.5944 | 0.7696 |
| 0.3709 | 2.0 | 9194 | 0.6454 | 0.7803 |
| 0.1444 | 3.0 | 13791 | 1.0337 | 0.7888 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DSI/personal_sentiment | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1622
## Model description
More information needed
## Intended uses & limitations
More information needed
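As a usage illustration, the snippet below runs the checkpoint through the `question-answering` pipeline; the local path is an assumption, since the card does not list a published repo id.

```python
# Minimal inference sketch; point the path at wherever the fine-tuned
# checkpoint was saved (hypothetical here).
from transformers import pipeline

qa = pipeline("question-answering", model="./distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This checkpoint is a DistilBERT model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```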
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2107 | 1.0 | 5533 | 1.1478 |
| 0.949 | 2.0 | 11066 | 1.1191 |
| 0.7396 | 3.0 | 16599 | 1.1622 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"nl",
"fr",
"en",
"arxiv:2104.09947",
"transformers",
"Tweets",
"Sentiment analysis"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-multilingual-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0252 | 1.0 | 3163 | 0.9733 |
| 0.7401 | 2.0 | 6326 | 0.9607 |
| 0.516 | 3.0 | 9489 | 1.0109 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DTAI-KULeuven/robbertje-1-gb-merged | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | # A fine-tuned GPT-Neo Model for Tweet Generation
This model is a fine-tuned version of the 1.3B-parameter GPT-Neo model developed by EleutherAI. Because the default GPT-Neo model did not see any social media data during pre-training, we fine-tuned it on tweets collected from Twitter between October and November 2021 that used climate-change hashtags. The model received data in the format `<username> - <tweet>`. We used an 80/20 train/test split, and to separate distinct tweets we added a start-of-tweet and an end-of-tweet token to the training dataset.
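To make the format concrete, the sketch below assembles training examples the way the paragraph above describes; the token strings are placeholders, since this card does not name the exact start-of-tweet and end-of-tweet tokens.

```python
# Sketch of the preprocessing described above (not the authors' exact script).
BOS_TWEET = "<|startoftweet|>"  # placeholder token name
EOS_TWEET = "<|endoftweet|>"    # placeholder token name

def format_example(username: str, tweet: str) -> str:
    """Render one sample as '<username> - <tweet>' wrapped in tweet delimiters."""
    return f"{BOS_TWEET}{username} - {tweet}{EOS_TWEET}"

tweets = [("climate_watcher", "COP26 pledges need teeth. #ClimateChange")]
corpus = [format_example(user, text) for user, text in tweets]

# 80/20 train/test split, as in the card.
cut = int(0.8 * len(corpus))
train_set, test_set = corpus[:cut], corpus[cut:]
```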
To guide you in using this model, please consult the `gpt_neo_1.3B_twitter.ipynb` Jupyter Notebook file from this repository.
---
license: cc-by-3.0
---
|
alexandrainst/da-binary-emotion-classification-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,066 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab971
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab971
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6551
- Wer: 0.4448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9461 | 1.77 | 500 | 3.2175 | 1.0 |
| 2.5387 | 3.53 | 1000 | 1.2239 | 0.7851 |
| 0.9632 | 5.3 | 1500 | 0.7275 | 0.6352 |
| 0.6585 | 7.07 | 2000 | 0.6218 | 0.5896 |
| 0.4875 | 8.83 | 2500 | 0.5670 | 0.5651 |
| 0.397 | 10.6 | 3000 | 0.5796 | 0.5487 |
| 0.3298 | 12.37 | 3500 | 0.5870 | 0.5322 |
| 0.2816 | 14.13 | 4000 | 0.5796 | 0.5016 |
| 0.2396 | 15.9 | 4500 | 0.5956 | 0.5040 |
| 0.2019 | 17.67 | 5000 | 0.5911 | 0.4847 |
| 0.1845 | 19.43 | 5500 | 0.6050 | 0.4800 |
| 0.1637 | 21.2 | 6000 | 0.6518 | 0.4927 |
| 0.1428 | 22.97 | 6500 | 0.6247 | 0.4645 |
| 0.1319 | 24.73 | 7000 | 0.6592 | 0.4711 |
| 0.1229 | 26.5 | 7500 | 0.6526 | 0.4556 |
| 0.1111 | 28.27 | 8000 | 0.6551 | 0.4448 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
alexandrainst/da-emotion-classification-base | [
"pytorch",
"tf",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 837 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab_2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3801
- Wer: 0.3035
## Model description
More information needed
## Intended uses & limitations
More information needed
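For orientation, the sketch below shows the usual CTC inference loop for such a checkpoint; the local path and the silent placeholder audio are assumptions.

```python
# Transcribe 16 kHz mono audio with the fine-tuned wav2vec2 model (sketch).
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

path = "./wav2vec2-base-timit-demo-colab_2"  # hypothetical checkpoint directory
processor = Wav2Vec2Processor.from_pretrained(path)
model = Wav2Vec2ForCTC.from_pretrained(path)

# Replace this silent placeholder with real speech sampled at 16 kHz.
waveform = np.zeros(16_000, dtype=np.float32)

inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```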
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7227 | 3.52 | 500 | 2.6961 | 1.0 |
| 1.1237 | 7.04 | 1000 | 0.6088 | 0.5315 |
| 0.4886 | 10.56 | 1500 | 0.4709 | 0.4353 |
| 0.3148 | 14.08 | 2000 | 0.4341 | 0.3942 |
| 0.2229 | 17.61 | 2500 | 0.4035 | 0.3616 |
| 0.1693 | 21.13 | 3000 | 0.3868 | 0.3289 |
| 0.1393 | 24.65 | 3500 | 0.3993 | 0.3135 |
| 0.118 | 28.17 | 4000 | 0.3801 | 0.3035 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
alexandrainst/da-hatespeech-classification-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 866 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8758169934640523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3149
- Accuracy: 0.8733
- F1: 0.8758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
alexandrainst/da-hatespeech-detection-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,719 | null | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library, but they are also fully compatible with the `ZeroShotClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research has addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained on NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() and/or ANLI [(Nie et al., 2020)]() and then fine-tuned on specific tasks that were previously converted to the textual entailment format.
For more information, please take a look at the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
  - `M`: Multi-Genre Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine-tuning the entailment model. Note that when more than one dataset is listed, the training was performed sequentially. For example: ACE-arg.
Some models, such as `HiTZ/A2T_RoBERTa_SMFA_ACE-arg`, have been trained with certain information, like the event trigger span, marked between square brackets (`'[['` and `']]'`). Make sure you follow the same preprocessing in order to obtain the best results.
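As a quick illustration, the sketch below runs the `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` checkpoint named above through the vanilla Transformers zero-shot pipeline; the input sentence and candidate labels are invented for the example, with the trigger wrapped in `[[` `]]` as the preprocessing note requires.

```python
# Minimal zero-shot sketch with the standard Transformers pipeline; the input
# text and candidate labels are made-up examples, not from this card.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_ACE-arg",
)
result = classifier(
    # Event trigger marked with [[ ]] to match the model's preprocessing.
    "The [[earthquake]] destroyed several buildings in the city.",
    candidate_labels=["natural disaster", "attack", "election"],
)
print(result["labels"][0], result["scores"][0])
```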
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` |
alexandrainst/da-sentiment-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"arxiv:1910.09700",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,432 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-newdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-newdata
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0588
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
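As a usage illustration, the checkpoint can be queried through the `text-classification` pipeline; the local path below is an assumption, since the card does not list a published repo id.

```python
# Minimal inference sketch for the fine-tuned sentiment classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="./distilbert-base-uncased-finetuned-sst2-newdata")
print(clf("The battery life on this laptop is fantastic."))
```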
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0543 | 1.0 | 1116 | 0.0307 | 0.9911 |
| 0.0235 | 2.0 | 2232 | 0.0372 | 0.9911 |
| 0.0102 | 3.0 | 3348 | 0.0486 | 0.9914 |
| 0.0003 | 4.0 | 4464 | 0.0563 | 0.9914 |
| 0.0008 | 5.0 | 5580 | 0.0588 | 0.9911 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
alexandrainst/da-subjectivivity-classification-base | [
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"dataset:DDSC/twitter-sent",
"dataset:DDSC/europarl",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 846 | null | ---
language:
- nl
tags:
- punctuation prediction
- punctuation
datasets: sonar
license: mit
widget:
- text: "Ondanks dat het nu bijna voorjaar is hebben we nog steds best koude dagen"
example_title: "Dutch Sample"
metrics:
- f1
---
This model predicts the punctuation of Dutch texts. We developed it to restore the punctuation of transcribed spoken language.
This model was trained on the [SoNaR Dataset](http://hdl.handle.net/10032/tm-a2-h5).
The model restores the following punctuation markers: **"." "," "?" "-" ":"**
## Sample Code
We provide a simple Python package that allows you to process text of any length.
## Install
To get started install the package from [pypi](https://pypi.org/project/deepmultilingualpunctuation/):
```bash
pip install deepmultilingualpunctuation
```
### Restore Punctuation
```python
from deepmultilingualpunctuation import PunctuationModel
model = PunctuationModel(model="oliverguhr/fullstop-dutch-sonar-punctuation-prediction")
text = "hervatting van de zitting ik verklaar de zitting van het europees parlement die op vrijdag 17 december werd onderbroken te zijn hervat"
result = model.restore_punctuation(text)
print(result)
```
**output**
> hervatting van de zitting. ik verklaar de zitting van het europees parlement, die op vrijdag 17 december werd onderbroken, te zijn hervat.
### Predict Labels
```python
from deepmultilingualpunctuation import PunctuationModel
model = PunctuationModel(model="oliverguhr/fullstop-dutch-sonar-punctuation-prediction")
text = "hervatting van de zitting ik verklaar de zitting van het europees parlement die op vrijdag 17 december werd onderbroken te zijn hervat"
clean_text = model.preprocess(text)
labeled_words = model.predict(clean_text)
print(labeled_words)
```
**output**
> [['hervatting', '0', 0.99998724], ['van', '0', 0.9999784], ['de', '0', 0.99991274], ['zitting', '.', 0.6771242], ['ik', '0', 0.9999466], ['verklaar', '0', 0.9998566], ['de', '0', 0.9999783], ['zitting', '0', 0.9999809], ['van', '0', 0.99996245], ['het', '0', 0.99997795], ['europees', '0', 0.9999783], ['parlement', ',', 0.9908242], ['die', '0', 0.999985], ['op', '0', 0.99998224], ['vrijdag', '0', 0.9999831], ['17', '0', 0.99997985], ['december', '0', 0.9999827], ['werd', '0', 0.999982], ['onderbroken', ',', 0.9951485], ['te', '0', 0.9999677], ['zijn', '0', 0.99997723], ['hervat', '.', 0.9957053]]
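If you need the punctuated string yourself, a few lines suffice to stitch these `[word, label, confidence]` triples back together; this helper is only an illustration and not part of the package API (the package's own `restore_punctuation` already does the same job).

```python
# Illustrative only: rebuild punctuated text from predicted labels, where the
# label "0" means "no punctuation after this word".
def join_labeled_words(labeled_words):
    return " ".join(
        word + ("" if label == "0" else label)
        for word, label, _confidence in labeled_words
    )

print(join_labeled_words([
    ["hervatting", "0", 0.99], ["van", "0", 0.99],
    ["de", "0", 0.99], ["zitting", ".", 0.68],
]))
# -> hervatting van de zitting.
```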
## Results
Performance differs across the individual punctuation markers, since hyphens and colons are, in many cases, optional and can be substituted by either a comma or a full stop. The model achieves the following F1 scores:
| Label | F1 Score |
| ------------- | -------- |
| 0 | 0.985816 |
| . | 0.854380 |
| ? | 0.684060 |
| , | 0.719308 |
| : | 0.696088 |
| - | 0.722000 |
| macro average | 0.776942 |
| micro average | 0.963427 |
## Languages
### Models
| Languages | Model |
| ------------------------------------------ | ------------------------------------------------------------ |
| English, Italian, French and German | [oliverguhr/fullstop-punctuation-multilang-large](https://huggingface.co/oliverguhr/fullstop-punctuation-multilang-large) |
| English, Italian, French, German and Dutch | [oliverguhr/fullstop-punctuation-multilingual-sonar-base](https://huggingface.co/oliverguhr/fullstop-punctuation-multilingual-sonar-base) |
| Dutch | [oliverguhr/fullstop-dutch-sonar-punctuation-prediction](https://huggingface.co/oliverguhr/fullstop-dutch-sonar-punctuation-prediction) |
### Community Models
| Languages | Model |
| ------------------------------------------ | ------------------------------------------------------------ |
|English, German, French, Spanish, Bulgarian, Italian, Polish, Dutch, Czech, Portuguese, Slovak, Slovenian| [kredor/punctuate-all](https://huggingface.co/kredor/punctuate-all) |
| Catalan | [softcatala/fullstop-catalan-punctuation-prediction](https://huggingface.co/softcatala/fullstop-catalan-punctuation-prediction) |
You can use different models by setting the model parameter:
```python
model = PunctuationModel(model = "oliverguhr/fullstop-dutch-punctuation-prediction")
```
## How to cite us
```
@misc{https://doi.org/10.48550/arxiv.2301.03319,
doi = {10.48550/ARXIV.2301.03319},
url = {https://arxiv.org/abs/2301.03319},
author = {Vandeghinste, Vincent and Guhr, Oliver},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7},
title = {FullStop: Punctuation and Segmentation Prediction for Dutch with Transformers},
publisher = {arXiv},
year = {2023},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
}
```
|
alexandrainst/da-ned-base | [
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab3000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab3000
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6852
- eval_wer: 0.3845
- eval_runtime: 71.297
- eval_samples_per_second: 9.846
- eval_steps_per_second: 1.234
- epoch: 24.22
- step: 8500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
DaWang/demo | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library, but they are also fully compatible with the `ZeroShotClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research has addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained on NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() and/or ANLI [(Nie et al., 2020)]() and then fine-tuned on specific tasks that were previously converted to the textual entailment format.
For more information, please take a look at the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
  - `M`: Multi-Genre Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine-tuning the entailment model. Note that when more than one dataset is listed, the training was performed sequentially. For example: ACE-arg.
Some models, such as `HiTZ/A2T_RoBERTa_SMFA_ACE-arg`, have been trained with certain information, like the event trigger span, marked between square brackets (`'[['` and `']]'`). Make sure you follow the same preprocessing in order to obtain the best results.
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` |
Dablio/Dablio | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library, but they are also fully compatible with the `ZeroShotClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research has addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained on NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() and/or ANLI [(Nie et al., 2020)]() and then fine-tuned on specific tasks that were previously converted to the textual entailment format.
For more information, please take a look at the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
  - `M`: Multi-Genre Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine-tuning the entailment model. Note that when more than one dataset is listed, the training was performed sequentially. For example: ACE-arg.
Some models, such as `HiTZ/A2T_RoBERTa_SMFA_ACE-arg`, have been trained with certain information, like the event trigger span, marked between square brackets (`'[['` and `']]'`). Make sure you follow the same preprocessing in order to obtain the best results.
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` |
DaisyMak/bert-finetuned-squad-accelerate-10epoch_transformerfrozen | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,907 | null | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library, but they are also fully compatible with the `ZeroShotClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research has addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained on NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() and/or ANLI [(Nie et al., 2020)]() and then fine-tuned on specific tasks that were previously converted to the textual entailment format.
For more information, please take a look at the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
  - `M`: Multi-Genre Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine-tuning the entailment model. Note that when more than one dataset is listed, the training was performed sequentially. For example: ACE-arg.
Some models, such as `HiTZ/A2T_RoBERTa_SMFA_ACE-arg`, have been trained with certain information, like the event trigger span, marked between square brackets (`'[['` and `']]'`). Make sure you follow the same preprocessing in order to obtain the best results.
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` |
Daltcamalea01/Camaleaodalt | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7680
- Precision: 0.9838
- Recall: 0.6632
- F1: 0.7923
- Accuracy: 0.6624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 130 | 0.2980 | 0.9315 | 0.9533 | 0.9423 | 0.9081 |
| No log | 2.0 | 260 | 0.2053 | 0.9537 | 0.9626 | 0.9581 | 0.9338 |
| No log | 3.0 | 390 | 0.1873 | 0.9464 | 0.9907 | 0.9680 | 0.9485 |
| 0.3064 | 4.0 | 520 | 0.1811 | 0.9585 | 0.9720 | 0.9652 | 0.9449 |
| 0.3064 | 5.0 | 650 | 0.1887 | 0.9587 | 0.9766 | 0.9676 | 0.9485 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
DamolaMack/Classyfied | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2555
- Precision: 1.0
- Recall: 0.0200
- F1: 0.0393
- Accuracy: 0.0486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 95 | 0.5756 | nan | 0.0 | nan | 0.715 |
| No log | 2.0 | 190 | 0.5340 | 0.6429 | 0.1579 | 0.2535 | 0.735 |
| No log | 3.0 | 285 | 0.5298 | 0.5833 | 0.3684 | 0.4516 | 0.745 |
| No log | 4.0 | 380 | 0.5325 | 0.5789 | 0.3860 | 0.4632 | 0.745 |
| No log | 5.0 | 475 | 0.5452 | 0.4815 | 0.4561 | 0.4685 | 0.705 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
DarkKibble/DialoGPT-medium-Tankman | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on the `validation` and `test` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 69.9 | 69.9 |
| test | 68.8 | 68.8 | |
DarkestSky/distilbert-base-uncased-finetuned-ner | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: _ctxSentence_TRAIN_all_TEST_french_second_train_set_french_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# _ctxSentence_TRAIN_all_TEST_french_second_train_set_french_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4936
- Precision: 0.8189
- Recall: 0.9811
- F1: 0.8927
- Accuracy: 0.8120
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 13 | 0.5150 | 0.7447 | 1.0 | 0.8537 | 0.7447 |
| No log | 2.0 | 26 | 0.5565 | 0.7447 | 1.0 | 0.8537 | 0.7447 |
| No log | 3.0 | 39 | 0.5438 | 0.7778 | 1.0 | 0.8750 | 0.7872 |
| No log | 4.0 | 52 | 0.5495 | 0.7778 | 1.0 | 0.8750 | 0.7872 |
| No log | 5.0 | 65 | 0.5936 | 0.7778 | 1.0 | 0.8750 | 0.7872 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Darkrider/covidbert_mednli | [
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1185
- Rouge1: 17.2081
- Rouge2: 8.8374
- Rougel: 16.8033
- Rougelsum: 16.663
## Model description
More information needed
## Intended uses & limitations
More information needed
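For orientation, a minimal inference sketch with the `summarization` pipeline follows; the local path is an assumption, as is the bilingual sample review.

```python
# Minimal sketch: summarize an Amazon-style review with the fine-tuned mT5.
from transformers import pipeline

summarizer = pipeline("summarization", model="./mt5-small-finetuned-amazon-en-es")
review = (
    "Compré este libro para mi hijo y le encantó, aunque la encuadernación "
    "llegó un poco dañada. Great story, mediocre packaging."
)
print(summarizer(review, max_length=30)[0]["summary_text"])
```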
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| No log | 1.0 | 303 | 3.9821 | 8.3993 | 2.0894 | 8.1427 | 8.135 |
| No log | 2.0 | 606 | 3.3511 | 13.1381 | 5.7193 | 12.8494 | 12.8375 |
| No log | 3.0 | 909 | 3.2235 | 15.2502 | 6.5903 | 14.728 | 14.612 |
| 5.8943 | 4.0 | 1212 | 3.1695 | 16.1725 | 8.1638 | 15.7655 | 15.6068 |
| 5.8943 | 5.0 | 1515 | 3.1579 | 16.3126 | 7.9727 | 15.8308 | 15.7236 |
| 5.8943 | 6.0 | 1818 | 3.1346 | 16.8323 | 8.088 | 16.3863 | 16.3343 |
| 5.8943 | 7.0 | 2121 | 3.1181 | 16.965 | 8.5799 | 16.6418 | 16.5064 |
| 3.7097 | 8.0 | 2424 | 3.1185 | 17.2081 | 8.8374 | 16.8033 | 16.663 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
DarshanDeshpande/marathi-distilbert | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"mr",
"dataset:Oscar Corpus, News, Stories",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: th
datasets:
- commonvoice
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/thai_commonvoice_blstm`
This model was trained by dzeinali using the commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b
pip install -e .
cd egs2/commonvoice/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/thai_commonvoice_blstm
```
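Alternatively, a Python-level inference sketch with `espnet_model_zoo` would look roughly like this; the wav path is an example, and the decoding options are library defaults rather than the recipe's exact settings.

```python
# Sketch: download the model from the Hub and transcribe a 16 kHz wav file.
import soundfile
from espnet2.bin.asr_inference import Speech2Text
from espnet_model_zoo.downloader import ModelDownloader

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack("espnet/thai_commonvoice_blstm"))

speech, rate = soundfile.read("sample_16k.wav")  # example path, 16 kHz mono
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```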
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Apr 18 11:05:12 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b`
- Commit date: `Mon Apr 4 21:04:45 2022 -0400`
## asr_train_asr_rnn_raw_th_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_th|10769|14356|49.0|43.1|7.9|5.1|56.0|53.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_th|10769|348793|95.2|3.0|1.8|1.8|6.6|53.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_th|10769|278454|95.0|2.8|2.2|1.1|6.1|41.2|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_rnn_raw_th_bpe150_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 30
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_th_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_th_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_th_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_th_bpe150_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_th_sp/wav.scp
- speech
- sound
- - dump/raw/train_th_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_th/wav.scp
- speech
- sound
- - dump/raw/dev_th/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- น
- ร
- ก
- า
- เ
- อ
- ง
- ย
- ม
- ั
- ส
- ด
- บ
- ว
- ิ
- ล
- ค
- ต
- ห
- ่
- ท
- ้
- พ
- ช
- แ
- ี
- จ
- ะ
- ที่
- ุ
- ้า
- ู
- ์
- ป
- ข
- ไ
- การ
- โ
- ไม่
- ่อ
- ่า
- ็
- ื
- ํา
- ือ
- จะ
- มา
- ของ
- ได้
- เป็น
- ถ
- ีย
- มี
- ่ง
- ว่า
- ้อ
- ัน
- ใน
- ไป
- คุณ
- ▁ฉัน
- ัง
- เขา
- ความ
- ใ
- ผ
- หน
- ให้
- ทํา
- ศ
- ซ
- ึ
- นี้
- ฉัน
- มัน
- ี่
- ญ
- และ
- ประ
- ิน
- หล
- ษ
- ภ
- ธ
- ณ
- ฟ
- อย่าง
- เธอ
- '?'
- '"'
- ฐ
- '!'
- ฝ
- ฉ
- ฮ
- ๊
- ''''
- '-'
- ฒ
- ๆ
- ๋
- ฎ
- ฤ
- ฏ
- ฬ
- ฑ
- .
- ”
- ':'
- “
- ','
- ’
- ;
- ฌ
- E
- R
- O
- T
- N
- A
- I
- S
- F
- C
- '~'
- B
- K
- X
- L
- H
- M
- Y
- —
- J
- W
- ฃ
- _
- ฯ
- ํ
- U
- ๅ
- ‘
- G
- '|'
- P
- ฆ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: bpe
bpemodel: data/th_token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_th_bpe150_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf:
num_layers: 2
hidden_size: 1024
sampling_probability: 0
att_conf:
atype: location
adim: 1024
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Daryaflp/roberta-retrained_ru_covid | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: id
datasets:
- commonvoice
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/id_commonvoice_blstm`
This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b
pip install -e .
cd egs2/commonvoice/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/id_commonvoice_blstm
```
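The model can also be loaded from Python via the model zoo downloader; a minimal sketch (assumes `espnet_model_zoo` is installed and 16 kHz mono audio):
```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text
d = ModelDownloader()
# download_and_unpack returns the config/model paths Speech2Text expects
speech2text = Speech2Text(**d.download_and_unpack("espnet/id_commonvoice_blstm"))
speech, rate = soundfile.read("speech.wav")  # 16 kHz mono audio assumed
text, *_ = speech2text(speech)[0]
print(text)
```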
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Apr 18 11:07:50 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b`
- Commit date: `Mon Apr 4 21:04:45 2022 -0400`
## asr_train_asr_rnn_tr_raw_id_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_id|3608|21471|89.6|9.0|1.4|0.9|11.3|28.3|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_id|3608|139356|95.8|1.8|2.4|0.8|5.1|28.3|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_id|3608|72919|92.9|4.0|3.1|1.2|8.3|28.3|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn_tr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_rnn_tr_raw_id_bpe150_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_id_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_id_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_id_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_id_bpe150_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_id_sp/wav.scp
- speech
- sound
- - dump/raw/train_id_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_id/wav.scp
- speech
- sound
- - dump/raw/dev_id/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- A
- .
- I
- K
- S
- U
- AN
- H
- E
- R
- T
- M
- P
- O
- NG
- N
- TA
- ▁DI
- ▁SE
- LA
- KAN
- NYA
- DA
- ▁KE
- C
- B
- SI
- ','
- ▁SAYA
- ER
- KA
- TI
- MA
- L
- RA
- ▁BER
- IN
- GA
- Y
- ▁MEN
- RI
- BU
- YANG
- NA
- JA
- TU
- MU
- LI
- SA
- ▁MA
- ANG
- KU
- BA
- AR
- ▁BA
- ▁INI
- ▁PER
- AT
- ▁PA
- LU
- ▁P
- GI
- ▁MEM
- DI
- EN
- ▁BE
- ▁TIDAK
- WA
- ▁DAN
- D
- ▁ME
- ▁KA
- ▁TER
- ▁SA
- '?'
- F
- ▁ITU
- DU
- ▁DIA
- AL
- HA
- J
- DE
- LE
- ▁PE
- ▁MENG
- ▁TE
- ▁DENGAN
- UN
- JU
- '-'
- GU
- G
- 'ON'
- ▁LA
- IL
- LAH
- OR
- ▁BI
- ▁UNTUK
- ▁DARI
- ▁KAMU
- ▁KO
- ▁APA
- ▁ADALAH
- ▁AKU
- V
- ▁TOM
- ▁SU
- ▁ADA
- ▁PEN
- MAN
- W
- ▁AKAN
- '""'
- MPA
- LO
- '"'
- GE
- ▁DALAM
- ▁TAHU
- JALAN
- ▁ORANG
- '!'
- Z
- ”
- X
- ''''
- Q
- ':'
- ;
- ’
- )
- –
- é
- —
- á
- \
- ‘
- (
- '['
- É
- ō
- ń
- ł
- “
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: bpe
bpemodel: data/id_token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_id_bpe150_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf:
num_layers: 2
hidden_size: 1024
sampling_probability: 0
att_conf:
atype: location
adim: 1024
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
DataikuNLP/average_word_embeddings_glove.6B.300d | [
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"license:apache-2.0"
] | sentence-similarity | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: pt
datasets:
- commonvoice
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/pt_commonvoice_blstm`
This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b
pip install -e .
cd egs2/commonvoice/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/pt_commonvoice_blstm
```
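A minimal Python inference sketch via the model zoo (assumes `espnet_model_zoo` is installed and 16 kHz mono input):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text
speech2text = Speech2Text.from_pretrained("espnet/pt_commonvoice_blstm")
speech, rate = soundfile.read("speech.wav")  # 16 kHz mono audio assumed
text, *_ = speech2text(speech)[0]
print(text)
```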
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Apr 11 18:55:23 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b`
- Commit date: `Mon Apr 4 21:04:45 2022 -0400`
## asr_train_asr_rnn_raw_pt_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.best/test_pt|4334|33716|84.7|12.4|2.9|1.3|16.6|46.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.best/test_pt|4334|191499|93.4|3.0|3.6|1.2|7.8|46.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.best/test_pt|4334|116003|90.4|5.7|3.9|1.5|11.1|46.9|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_rnn_raw_pt_bpe150_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 30
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_pt_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_pt_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_pt_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_pt_bpe150_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_pt_sp/wav.scp
- speech
- sound
- - dump/raw/train_pt_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_pt/wav.scp
- speech
- sound
- - dump/raw/dev_pt/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- S
- R
- I
- U
- E
- O
- A
- .
- N
- M
- L
- ▁A
- ▁DE
- RA
- ▁O
- T
- ▁E
- ▁UM
- C
- TA
- DO
- G
- TO
- TE
- DA
- VE
- B
- NDO
- ▁SE
- ▁QUE
- P
- ▁UMA
- LA
- D
- ▁COM
- CA
- á
- '?'
- ▁PE
- ▁EM
- IN
- TI
- IS
- ▁C
- H
- HO
- ▁CA
- ▁P
- CO
- ','
- ▁NO
- MA
- NTE
- PA
- ▁NãO
- DE
- ãO
- ▁ME
- ▁PARA
- Z
- ▁MA
- VA
- PO
- ▁DO
- ▁VOCê
- RI
- ▁DI
- GA
- VI
- ▁é
- LO
- IA
- ▁ELE
- ▁EU
- ▁ESTá
- HA
- ▁M
- X
- ▁NA
- NA
- é
- CE
- LE
- GO
- VO
- ▁RE
- ▁FO
- ▁FA
- ▁CO
- QUE
- ▁EST
- BE
- ▁CON
- ó
- SE
- ▁POR
- ê
- í
- çãO
- ▁DA
- RES
- ▁QUA
- ▁HOMEM
- RIA
- çA
- ▁SA
- V
- ▁PRE
- MENTE
- ZE
- NHA
- '-'
- ▁BA
- MOS
- ▁SO
- ▁BO
- ç
- '"'
- '!'
- ú
- ã
- K
- Y
- É
- W
- ô
- Á
- ':'
- ;
- ''''
- ”
- Ô
- ñ
- “
- Ú
- Í
- Ó
- ü
- À
- â
- à
- õ
- J
- Q
- F
- Â
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: bpe
bpemodel: data/pt_token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_pt_bpe150_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf:
num_layers: 2
hidden_size: 1024
sampling_probability: 0
att_conf:
atype: location
adim: 1024
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
DataikuNLP/camembert-base | [
"pytorch",
"tf",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab_3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset (TIMIT, going by the model name).
It achieves the following results on the evaluation set:
- Loss: 3.1942
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
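Note that the reported WER of 1.0 indicates this run did not learn to transcribe. For reference, a minimal CTC inference sketch (the repo id is a placeholder, and the processor files are assumed to have been pushed with the checkpoint):
```python
import torch
import soundfile
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
repo = "<this-repo-id>"  # placeholder for this checkpoint's hub id
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)
speech, rate = soundfile.read("sample.wav")  # 16 kHz mono audio assumed
inputs = processor(speech, sampling_rate=rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(pred_ids))
```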
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.2975 | 3.52 | 500 | 3.1771 | 1.0 |
| 3.1468 | 7.04 | 1000 | 3.1917 | 1.0 |
| 3.147 | 10.56 | 1500 | 3.1784 | 1.0 |
| 3.1467 | 14.08 | 2000 | 3.1850 | 1.0 |
| 3.1446 | 17.61 | 2500 | 3.2022 | 1.0 |
| 3.1445 | 21.13 | 3000 | 3.2196 | 1.0 |
| 3.1445 | 24.65 | 3500 | 3.2003 | 1.0 |
| 3.1443 | 28.17 | 4000 | 3.1942 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Davlan/bert-base-multilingual-cased-finetuned-luo | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: afl-3.0
---
# 🍊 Jeju Dialect Translation Model 🍊
- Standard Korean -> Jeju dialect
- Made by Team 3 of the 3rd Goorm NLP course!!
- github link : https://github.com/Goormnlpteam3/JeBERT
## 1. Seq2Seq Transformer Model
- encoder : BertConfig
- decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
## 2. Dataset
- Jit Dataset
- AI HUB (+ arae-a characters)_v2
## 3. Hyper Parameters
- Epochs : 10 (best at epoch 7)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm-up Ratio : 0.1
- Batch Size : 32
## 4. BLEU Score
- Jit + AI HUB (+ arae-a characters) dataset : 67.6
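A minimal usage sketch, assuming the checkpoint was exported as a Hugging Face `EncoderDecoderModel` together with its WordPiece tokenizer (the repo id below is a placeholder — see the github link above for the actual weights):
```python
from transformers import BertTokenizerFast, EncoderDecoderModel
repo = "<this-repo-id>"  # placeholder: replace with the published checkpoint id
tokenizer = BertTokenizerFast.from_pretrained(repo)
model = EncoderDecoderModel.from_pretrained(repo)
# standard Korean in, Jeju dialect out
inputs = tokenizer("안녕하세요", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=64, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```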
---
### CREDIT
- 주형준 : [email protected]
- 강가람 : [email protected]
- 고광연 : [email protected]
- 김수연 : [email protected]
- 이원경 : [email protected]
- 조성은 : [email protected] |
Davlan/bert-base-multilingual-cased-finetuned-swahili | [
"pytorch",
"tf",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 67 | null | ---
license: gpl-3.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-base-chinese-finetuned-job-resume
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-base-chinese-finetuned-job-resume
This model is a fine-tuned version of [ckiplab/gpt2-base-chinese](https://huggingface.co/ckiplab/gpt2-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2658
## Model description
More information needed
## Intended uses & limitations
More information needed
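No usage snippet was included. A minimal generation sketch, assuming this checkpoint keeps the base model's convention of pairing the GPT-2 LM with a BERT-style Chinese tokenizer (the model path below is a placeholder):
```python
from transformers import BertTokenizerFast, GPT2LMHeadModel, pipeline
# the ckiplab GPT-2 checkpoints pair the LM with a BERT-style Chinese tokenizer
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = GPT2LMHeadModel.from_pretrained("./gpt2-base-chinese-finetuned-job-resume")  # path assumed
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("熟悉深度学习与自然语言处理", max_length=50))
```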
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 480 | 2.3271 |
| 2.4967 | 2.0 | 960 | 2.2729 |
| 2.2259 | 3.0 | 1440 | 2.2658 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Davlan/bert-base-multilingual-cased-ner-hrl | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible",
"has_space"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 269,898 | null | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- tamil
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/tamil_slu`
This model was trained by Sujay S Kumar using tamil recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 395bda6123ae268f991e5ef1dab887b6e677974a
pip install -e .
cd egs2/tamil/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/tamil_slu
```
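A Python inference sketch (assumes `espnet_model_zoo` is installed; since the token list mixes intent labels such as `Request_Acc_balance` with Tamil words, the decoded string carries the predicted intent):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text
speech2text = Speech2Text.from_pretrained("espnet/tamil_slu")
speech, rate = soundfile.read("utterance.wav")  # 16 kHz mono audio assumed
text, *_ = speech2text(speech)[0]
print(text)  # transcript interleaved with intent tokens, e.g. Request_Acc_balance
```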
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Oct 3 20:59:46 EDT 2021`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a3`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `b41391336042a4876e30d9fe5c66afb4e4be404c`
- Commit date: `Wed Sep 22 10:02:03 2021 -0400`
## asr_train_asr_wav2vec2_xlsr_raw_word
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/test|80|372|70.4|22.6|7.0|3.2|32.8|56.3|
|inference_asr_model_valid.acc.ave_5best/valid|80|372|70.4|22.6|7.0|3.2|32.8|56.3|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/test|80|3234|85.9|8.2|5.9|5.5|19.6|56.3|
|inference_asr_model_valid.acc.ave_5best/valid|80|3234|85.9|8.2|5.9|5.5|19.6|56.3|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_wav2vec2_xlsr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp_train_asr_wav2vec2_xlsr/asr_train_asr_wav2vec2_xlsr_raw_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 250
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models: 5
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/train/speech_shape
- exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/train/text_shape.word
valid_shape_file:
- exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/valid/speech_shape
- exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 5000
token_list:
- <blank>
- <unk>
- காசு
- வேணும்
- Request_Acc_balance
- Account
- Money_deposit
- Money_withdraw
- Credit_card_payments
- card
- மீதி
- Money_transfer
- எவ்வளோ
- Bill_payments
- Credit
- கட்ட
- எவ்வளவு
- காச
- கட்டவேணும்
- இந்த
- Balance
- வேண்டும்
- போடோணும்
- கணக்கு
- செய்ய
- Bill
- போட
- account
- மாத்த
- credit
- pay
- பண்ணோணும்
- Deposit
- மீளெடுக்க
- வைப்பு
- எடுக்கவேணும்
- ல
- இருக்கிற
- எடுக்கணும்
- இல
- இருந்து
- மற்ற
- accountக்கு
- balance
- என்ன
- bill
- அ
- ஒருக்கா
- ஏலுமோ
- deposit
- பண்ண
- payment
- Account-la
- காசெடுக்கோணும்
- அனுப்பவேணும்
- காசெடுக்க
- இன்னொரு
- கு
- Cash
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_xlsr
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 4
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.3a3
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Davlan/byt5-base-yor-eng-mt | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/hot_domme/1652063339945/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445280995175911425/JkWNc3mK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">™STREET DON 🥬⛓🦂غعتس دتعد🦂⛓ Steamin Hot</div>
<div style="text-align: center; font-size: 14px;">@hot_domme</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ™STREET DON 🥬⛓🦂غعتس دتعد🦂⛓ Steamin Hot.
| Data | ™STREET DON 🥬⛓🦂غعتس دتعد🦂⛓ Steamin Hot |
| --- | --- |
| Tweets downloaded | 2733 |
| Retweets | 324 |
| Short tweets | 371 |
| Tweets kept | 2038 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cv5ajux/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hot_domme's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2znfpdzh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2znfpdzh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hot_domme')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Davlan/mT5_base_yoruba_adr | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2003.10564",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: paraphraser-spanish-t5-small
results: []
datasets:
- paws-x
- tapaco
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paraphraser-spanish-t5-small
This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the PAWS-X and TaPaCo datasets listed above.
It achieves the following results on the evaluation set:
- eval_loss: 1.1079
- eval_runtime: 4.9573
- eval_samples_per_second: 365.924
- eval_steps_per_second: 36.713
- epoch: 0.83
- step: 43141
## Model description
More information needed
## Intended uses & limitations
More information needed
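A minimal paraphrasing sketch (the repo id is a placeholder; whether training used a task prefix is undocumented, so the raw sentence is passed as-is — an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
repo = "<this-repo-id>"  # placeholder for this checkpoint's hub id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)
inputs = tokenizer("La vida es corta, pero hermosa.", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3, max_length=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```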
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2 |
Davlan/mbart50-large-eng-yor-mt | [
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
title: Real Cascade U-Nets for Anime Image Super Resolution
emoji: 👀
colorFrom: blue
colorTo: green
sdk: gradio
app_file: app.py
pinned: true
license: mit
---
> From <https://github.com/bilibili/ailab/tree/main/Real-CUGAN>
# Configuration
`title`: _string_
Display title for the Space
`emoji`: _string_
Space emoji (emoji-only character allowed)
`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: _string_
Can be either `gradio`, `streamlit`, or `static`
`sdk_version` : _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
Path is relative to the root of the repository.
`models`: _List[string]_
HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
Will be parsed automatically from your code if not specified here.
`datasets`: _List[string]_
HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
Will be parsed automatically from your code if not specified here.
`pinned`: _boolean_
Whether the Space stays on top of your list.
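Taken together, these fields form the YAML front matter at the top of the card; this Space's own header, reproduced as an example:
```yaml
---
title: Real Cascade U-Nets for Anime Image Super Resolution
emoji: 👀
colorFrom: blue
colorTo: green
sdk: gradio
app_file: app.py
pinned: true
license: mit
---
```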
|
Davlan/xlm-roberta-base-finetuned-english | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- uk
license: cc-by-nc-sa-4.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- uk
datasets:
- mozilla-foundation/common_voice_7_0
---
# Ukrainian STT model (with a large language model built from a news corpus)
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset.
Attribution for the language model's training data:
- Chaplynskyi, D. et al. (2021) lang-uk Ukrainian Ubercorpus [Data set]. https://lang.org.ua/uk/corpora/#anchor4
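No inference snippet was included; a minimal greedy CTC decoding sketch (the news-corpus language model mentioned in the title is not wired in here, and the repo id is a placeholder):
```python
import torch
import soundfile
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
repo = "<this-repo-id>"  # placeholder for this checkpoint's hub id
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)
speech, rate = soundfile.read("audio.wav")  # 16 kHz mono audio assumed
inputs = processor(speech, sampling_rate=rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)  # greedy decoding, no n-gram LM
print(processor.batch_decode(pred_ids)[0])
```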
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2815 | 7.93 | 500 | 0.3536 | 0.4753 | 0.1009 |
| 1.0869 | 15.86 | 1000 | 0.2317 | 0.3111 | 0.0614 |
| 0.9984 | 23.8 | 1500 | 0.2022 | 0.2676 | 0.0521 |
| 0.975 | 31.74 | 2000 | 0.1948 | 0.2469 | 0.0487 |
| 0.9306 | 39.67 | 2500 | 0.1916 | 0.2377 | 0.0464 |
| 0.8868 | 47.61 | 3000 | 0.1903 | 0.2257 | 0.0439 |
| 0.8424 | 55.55 | 3500 | 0.1786 | 0.2206 | 0.0423 |
| 0.8126 | 63.49 | 4000 | 0.1849 | 0.2160 | 0.0416 |
| 0.7901 | 71.42 | 4500 | 0.1869 | 0.2138 | 0.0413 |
| 0.7671 | 79.36 | 5000 | 0.1855 | 0.2075 | 0.0394 |
| 0.7467 | 87.3 | 5500 | 0.1884 | 0.2049 | 0.0389 |
| 0.731 | 95.24 | 6000 | 0.1877 | 0.2060 | 0.0387 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
|
Declan/ChicagoTribune_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/lonelythey18/1651554075248/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488171735174238211/4Y7YAhJG_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Cara</div>
<div style="text-align: center; font-size: 14px;">@lonelythey18</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Cara.
| Data | Cara |
| --- | --- |
| Tweets downloaded | 2640 |
| Retweets | 301 |
| Short tweets | 500 |
| Tweets kept | 1839 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3l0t3r5o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lonelythey18's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1znlhqjr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1znlhqjr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lonelythey18')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Declan/ChicagoTribune_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1490143959540133891/C-DLhhNQ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Random Small Streamer Chick</div>
<div style="text-align: center; font-size: 14px;">@irenegellar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Random Small Streamer Chick.
| Data | Random Small Streamer Chick |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 331 |
| Short tweets | 472 |
| Tweets kept | 2438 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ns8qkzx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @irenegellar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fvfz3ir) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fvfz3ir/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/irenegellar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Declan/FoxNews_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-nostop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-nostop
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0701
- Accuracy: 0.9888
## Model description
More information needed
## Intended uses & limitations
More information needed
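A minimal classification sketch (the repo id is a placeholder; label names following the SST-2 convention of the base task is an assumption):
```python
from transformers import pipeline
# repo id below is a placeholder for this checkpoint
clf = pipeline("text-classification", model="<this-repo-id>")
print(clf("a gripping, beautifully shot film"))
```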
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.125 | 1.0 | 1116 | 0.0975 | 0.9743 |
| 0.0599 | 2.0 | 2232 | 0.0692 | 0.9840 |
| 0.0191 | 3.0 | 3348 | 0.0570 | 0.9871 |
| 0.0109 | 4.0 | 4464 | 0.0660 | 0.9882 |
| 0.0092 | 5.0 | 5580 | 0.0701 | 0.9888 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Declan/FoxNews_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
language:
- it
datasets:
- custom
---
# it5-efficient-small-lfqa
This is an efficient-small T5 model ([IT5](https://huggingface.co/stefan-it/it5-efficient-small-el32)) fine-tuned on a long-form question answering (LFQA) dataset.
<p align="center">
<img src="https://www.marcorossiartecontemporanea.net/wp-content/uploads/2021/04/MARCTM0413-9CFBn1gs-scaled.jpg" width="400"> </br>
Mirco Marchelli, Voce in capitolo, 2019
</p>
## Training Data
This model was trained on an LFQA dataset; it provides long-form answers to open-domain questions.
## Usage and Performance
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("efederici/it5-efficient-small-lfqa")
model = AutoModelForSeq2SeqLM.from_pretrained("efederici/it5-efficient-small-lfqa")
query = "con chi si era messo in contatto elon musk?"
# concatenated texts/document text
doc = """
La notizia dell’acquisizione da parte di Elon Musk del 9,2 per cento delle azioni di Twitter e del suo successivo ingresso nel consiglio di amministrazione della società hanno attirato grandi attenzioni, non solo da parte degli analisti finanziari, ma anche di chi si occupa di social media e del modo in cui viene impiegata la piattaforma da centinaia di milioni di persone in tutto il mondo. Musk, che ha un grande seguito su Twitter, in passato aveva più volte criticato il social network, accusandolo di non tutelare a sufficienza le libertà di espressione, anche in casi limite come l’assalto al Congresso degli Stati Uniti del 2021.
Alcune settimane fa, Musk si era messo in contatto con Parag Agrawal, CEO di Twitter da fine novembre 2021, e con il suo predecessore e cofondatore della società, Jack Dorsey, annunciando di avere avviato l’acquisizione di alcune quote dell’azienda e di essere disponibile per discutere di soluzioni per migliorarla. Secondo fonti del New York Times, dopo i primi contatti, Agrawal aveva proposto a Musk di avere un ruolo più attivo oltre a quello di azionista, offrendogli la possibilità di entrare nel consiglio di amministrazione.
"""
query_and_docs = f"Domanda: {query} Contesto: {doc}"
model_input = tokenizer(query_and_docs, truncation=True, padding=True, return_tensors="pt")
output = model.generate(
input_ids=model_input["input_ids"],
attention_mask=model_input["attention_mask"],
min_length=10,
max_length=256,
do_sample=False,
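# note: temperature, top_k and top_p below have no effect because do_sample=False (beam search is used)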
early_stopping=True,
num_beams=8,
temperature=1.0,
top_k=None,
top_p=None,
no_repeat_ngram_size=3,
num_return_sequences=1
)
tokenizer.batch_decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
The model will predict: 'Elon Musk si era messo in contatto con Parag Agrawal, CEO di Twitter da fine novembre 2021 e con il suo predecessore e cofondatore della società, Jack Dorsey, annunciando di avere avviato l’acquisizione di alcune quote dell’azienda e di essere disponibile per discutere soluzioni per migliorarla.' |
Declan/HuffPost_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-05-03T07:54:05Z |
---
language: et
license: cc-by-4.0
widget:
- text: "Eesti President on Alar Karis."
---
# Estonian NER model based on EstBERT
This model is a fine-tuned version of [tartuNLP/EstBERT](https://huggingface.co/tartuNLP/EstBERT) on the Estonian NER dataset. The model was trained by tartuNLP, the NLP research group at the Institute of Computer Science at the University of Tartu.
It achieves the following results on the test set:
- Loss: 0.3565
- Precision: 0.7612
- Recall: 0.7744
- F1: 0.7678
- Accuracy: 0.9672
The entity-level results are as follows:
| | Precision | Recall | F1 | Number |
|---------| --------- | ------- | ------- | ------- |
| DATE | 0.7278 | 0.7258 | 0.7268 | 372 |
| EVENT | 0.3721 | 0.5714 | 0.4507 | 28 |
| GPE | 0.8679 | 0.8369 | 0.8521 | 840 |
| LOC | 0.6545 | 0.4832 | 0.5560 | 149 |
| MONEY | 0.6625 | 0.6023 | 0.6310 | 88 |
| ORG | 0.6761 | 0.7267 | 0.7005 | 589 |
| PER | 0.8255 | 0.9068 | 0.8642 | 751 |
| PERCENT | 1.0 | 0.9589 | 0.9790 | 73 |
| PROD | 0.6030 | 0.5430 | 0.5714 | 221 |
| TIME | 0.5682 | 0.5556 | 0.5618 | 45 |
| TITLE | 0.7 | 0.8063 | 0.7494 | 191 |
## How to use
You can use this model with the Transformers NER pipeline. Post-processing of the results may be necessary, because the model occasionally tags individual subword tokens as entities; a sketch of one way to merge them into word-level spans follows the example output below.
```python
from transformers import BertTokenizer, BertForTokenClassification
from transformers import pipeline
tokenizer = BertTokenizer.from_pretrained('tartuNLP/EstBERT_NER')
bertner = BertForTokenClassification.from_pretrained('tartuNLP/EstBERT_NER')
nlp = pipeline("ner", model=bertner, tokenizer=tokenizer)
text = "Kaia Kanepi (WTA 57.) langes USA-s Charlestonis toimuval WTA 500 kategooria tenniseturniiril konkurentsist kaheksandikfinaalis, kaotades poolatarile Magda Linette'ile (WTA 64.) 3 : 6, 6 : 4, 2 : 6."
ner_results = nlp(text)
tokens = tokenizer(text)
tokens = tokenizer.convert_ids_to_tokens(tokens['input_ids'])
print(f'tokens: {tokens}')
print(f'NER model: {ner_results}')
```
```
tokens: ['[CLS]', 'kai', '##a', 'kanepi', '(', 'w', '##ta', '57', '.', ')', 'langes', 'usa', '-', 's', 'cha', '##rl', '##est', '##onis', 'toimuval', 'w', '##ta', '500', 'kategooria', 'tennise', '##turniiril', 'konkurentsist', 'kaheksandik', '##finaalis', ',', 'kaotades', 'poola', '##tari', '##le', 'ma', '##gda', 'line', '##tte', "'", 'ile', '(', 'w', '##ta', '64', '.', ')', '3', ':', '6', ',', '6', ':', '4', ',', '2', ':', '6', '.', '[SEP]']
```
```
NER model: [{'entity': 'B-PER', 'score': 0.99999887, 'index': 1, 'word': 'kai', 'start': None, 'end': None}, {'entity': 'B-PER', 'score': 0.97371966, 'index': 2, 'word': '##a', 'start': None, 'end': None}, {'entity': 'I-PER', 'score': 0.99999815, 'index': 3, 'word': 'kanepi', 'start': None, 'end': None}, {'entity': 'B-ORG', 'score': 0.63085276, 'index': 5, 'word': 'w', 'start': None, 'end': None}, {'entity': 'B-GPE', 'score': 0.99999934, 'index': 11, 'word': 'usa', 'start': None, 'end': None}, {'entity': 'B-GPE', 'score': 0.9999685, 'index': 14, 'word': 'cha', 'start': None, 'end': None}, {'entity': 'I-GPE', 'score': 0.8875574, 'index': 15, 'word': '##rl', 'start': None, 'end': None}, {'entity': 'I-GPE', 'score': 0.9996168, 'index': 16, 'word': '##est', 'start': None, 'end': None}, {'entity': 'I-GPE', 'score': 0.9992657, 'index': 17, 'word': '##onis', 'start': None, 'end': None}, {'entity': 'B-EVENT', 'score': 0.99999064, 'index': 19, 'word': 'w', 'start': None, 'end': None}, {'entity': 'I-EVENT', 'score': 0.9772493, 'index': 20, 'word': '##ta', 'start': None, 'end': None}, {'entity': 'I-EVENT', 'score': 0.99999076, 'index': 21, 'word': '500', 'start': None, 'end': None}, {'entity': 'I-EVENT', 'score': 0.99955636, 'index': 22, 'word': 'kategooria', 'start': None, 'end': None}, {'entity': 'B-TITLE', 'score': 0.8771319, 'index': 30, 'word': 'poola', 'start': None, 'end': None}, {'entity': 'B-PER', 'score': 0.99999785, 'index': 33, 'word': 'ma', 'start': None, 'end': None}, {'entity': 'B-PER', 'score': 0.9998398, 'index': 34, 'word': '##gda', 'start': None, 'end': None}, {'entity': 'I-PER', 'score': 0.9999987, 'index': 35, 'word': 'line', 'start': None, 'end': None}, {'entity': 'I-PER', 'score': 0.9999976, 'index': 36, 'word': '##tte', 'start': None, 'end': None}, {'entity': 'I-PER', 'score': 0.99999285, 'index': 37, 'word': "'", 'start': None, 'end': None}, {'entity': 'I-PER', 'score': 0.9999794, 'index': 38, 'word': 'ile', 'start': None, 'end': None}, {'entity': 'B-ORG', 'score': 0.7664479, 'index': 40, 'word': 'w', 'start': None, 'end': None}]
```
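If word-level entities are more convenient than the raw subword tags above, the pipeline's built-in aggregation can merge contiguous subword predictions. A minimal sketch, assuming a recent Transformers version where `aggregation_strategy` is available:
```python
from transformers import pipeline
nlp = pipeline("ner", model="tartuNLP/EstBERT_NER", aggregation_strategy="simple")
ner_results = nlp("Kaia Kanepi langes USA-s Charlestonis toimuval tenniseturniiril konkurentsist.")
print(ner_results)  # each dict now covers a whole span, e.g. {'entity_group': 'PER', 'word': 'kaia kanepi', ...}
```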
## Intended uses & limitations
This model can be used to find named entities from Estonian texts. The model is free to use for anyone. TartuNLP does not guarantee that the model is useful for anyone or anything. TartuNLP is not responsible for any results it generates.
## Training and evaluation data
The model was trained on two Estonian NER datasets:
- [The Reannotated Estonian NER corpus](https://metashare.ut.ee/repository/browse/reannotated-estonian-ner-corpus/bd43f1f614a511eca6e4fa163e9d45477d086613d2894fd5af79bf13e3f13594/)
- [The New Estonian NER corpus](https://metashare.ut.ee/repository/browse/new-estonian-ner-corpus/98b6706c963c11eba6e4fa163e9d45470bcd0533b6994c93ab8b8c628516ffed/)
Both datasets have been annotated with the same annotation scheme. For training this model, the datasets were joined.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1024
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: polynomial
- max num_epochs: 150
- early stopping limit: 20
- early stopping tol: 0.0001
- mixed_precision_training: Native AMP
### Training results
The final model was saved after epoch 53 (shown in bold), where the overall F1 on the development set was highest.
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Date Precision | Date Recall | Date F1 | Date Number | Event Precision | Event Recall | Event F1 | Event Number | Gpe Precision | Gpe Recall | Gpe F1 | Gpe Number | Loc Precision | Loc Recall | Loc F1 | Loc Number | Money Precision | Money Recall | Money F1 | Money Number | Org Precision | Org Recall | Org F1 | Org Number | Per Precision | Per Recall | Per F1 | Per Number | Percent Precision | Percent Recall | Percent F1 | Percent Number | Prod Precision | Prod Recall | Prod F1 | Prod Number | Time Precision | Time Recall | Time F1 | Time Number | Title Precision | Title Recall | Title F1 | Title Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|:--------------:|:-----------:|:-------:|:-----------:|:---------------:|:------------:|:--------:|:------------:|:-------------:|:----------:|:------:|:----------:|:-------------:|:----------:|:------:|:----------:|:---------------:|:------------:|:--------:|:------------:|:-------------:|:----------:|:------:|:----------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:--------------:|:--------------:|:-----------:|:-------:|:-----------:|:--------------:|:-----------:|:-------:|:-----------:|:---------------:|:------------:|:--------:|:------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3252 | 1 | 1061 | 0.1628 | 0.6835 | 0.6083 | 0.6437 | 0.9526 | 0.5910 | 0.6022 | 0.5965 | 372 | 0.0 | 0.0 | 0.0 | 28 | 0.8073 | 0.7631 | 0.7846 | 840 | 0.1389 | 0.0336 | 0.0541 | 149 | 0.4217 | 0.3977 | 0.4094 | 88 | 0.5381 | 0.5280 | 0.5330 | 589 | 0.7917 | 0.8655 | 0.8270 | 751 | 0.6471 | 0.3014 | 0.4112 | 73 | 0.2581 | 0.0724 | 0.1131 | 221 | 0.1429 | 0.0889 | 0.1096 | 45 | 0.7805 | 0.6702 | 0.7211 | 191 | 0.6835 | 0.6083 | 0.6437 | 0.9526 |
| 0.1513 | 2 | 2122 | 0.1332 | 0.6906 | 0.7329 | 0.7111 | 0.9615 | 0.6185 | 0.7366 | 0.6724 | 372 | 0.0857 | 0.1071 | 0.0952 | 28 | 0.7874 | 0.8595 | 0.8219 | 840 | 0.4767 | 0.2752 | 0.3489 | 149 | 0.6848 | 0.7159 | 0.7000 | 88 | 0.6158 | 0.6231 | 0.6194 | 589 | 0.7770 | 0.9001 | 0.8341 | 751 | 0.9565 | 0.9041 | 0.9296 | 73 | 0.5 | 0.3620 | 0.4199 | 221 | 0.3571 | 0.3333 | 0.3448 | 45 | 0.6033 | 0.7644 | 0.6744 | 191 | 0.6906 | 0.7329 | 0.7111 | 0.9615 |
| 0.1131 | 3 | 3183 | 0.1281 | 0.7224 | 0.7338 | 0.7280 | 0.9638 | 0.7054 | 0.7339 | 0.7194 | 372 | 0.1053 | 0.1429 | 0.1212 | 28 | 0.8013 | 0.85 | 0.8250 | 840 | 0.5476 | 0.3087 | 0.3948 | 149 | 0.6386 | 0.6023 | 0.6199 | 88 | 0.6371 | 0.6469 | 0.6420 | 589 | 0.8235 | 0.8762 | 0.8490 | 751 | 0.9859 | 0.9589 | 0.9722 | 73 | 0.5148 | 0.3937 | 0.4462 | 221 | 0.5116 | 0.4889 | 0.5 | 45 | 0.6245 | 0.7749 | 0.6916 | 191 | 0.7224 | 0.7338 | 0.7280 | 0.9638 |
| 0.0884 | 4 | 4244 | 0.1354 | 0.7283 | 0.7386 | 0.7334 | 0.9639 | 0.6785 | 0.6694 | 0.6739 | 372 | 0.1795 | 0.25 | 0.2090 | 28 | 0.8231 | 0.8310 | 0.8270 | 840 | 0.6020 | 0.3960 | 0.4777 | 149 | 0.6092 | 0.6023 | 0.6057 | 88 | 0.6473 | 0.7012 | 0.6732 | 589 | 0.8351 | 0.8628 | 0.8487 | 751 | 1.0 | 0.9726 | 0.9861 | 73 | 0.5899 | 0.4751 | 0.5263 | 221 | 0.4524 | 0.4222 | 0.4368 | 45 | 0.6 | 0.7853 | 0.6803 | 191 | 0.7283 | 0.7386 | 0.7334 | 0.9639 |
| 0.0685 | 5 | 5305 | 0.1383 | 0.7224 | 0.7696 | 0.7453 | 0.9644 | 0.6635 | 0.7473 | 0.7029 | 372 | 0.26 | 0.4643 | 0.3333 | 28 | 0.8259 | 0.8357 | 0.8308 | 840 | 0.5913 | 0.4564 | 0.5152 | 149 | 0.6437 | 0.6364 | 0.64 | 88 | 0.6540 | 0.7284 | 0.6892 | 589 | 0.8070 | 0.8961 | 0.8492 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5693 | 0.5204 | 0.5437 | 221 | 0.5192 | 0.6 | 0.5567 | 45 | 0.6320 | 0.7644 | 0.6919 | 191 | 0.7224 | 0.7696 | 0.7453 | 0.9644 |
| 0.0532 | 6 | 6366 | 0.1493 | 0.7099 | 0.7613 | 0.7347 | 0.9631 | 0.6727 | 0.6962 | 0.6843 | 372 | 0.2308 | 0.5357 | 0.3226 | 28 | 0.8242 | 0.8262 | 0.8252 | 840 | 0.5877 | 0.4497 | 0.5095 | 149 | 0.6410 | 0.5682 | 0.6024 | 88 | 0.6232 | 0.7470 | 0.6795 | 589 | 0.8087 | 0.8895 | 0.8472 | 751 | 0.9672 | 0.8082 | 0.8806 | 73 | 0.5107 | 0.5385 | 0.5242 | 221 | 0.6190 | 0.5778 | 0.5977 | 45 | 0.6371 | 0.7906 | 0.7056 | 191 | 0.7099 | 0.7613 | 0.7347 | 0.9631 |
| 0.0403 | 7 | 7427 | 0.1592 | 0.7239 | 0.7592 | 0.7411 | 0.9642 | 0.6923 | 0.7016 | 0.6969 | 372 | 0.2857 | 0.5714 | 0.3810 | 28 | 0.8272 | 0.8262 | 0.8267 | 840 | 0.5752 | 0.4362 | 0.4962 | 149 | 0.6265 | 0.5909 | 0.6082 | 88 | 0.6402 | 0.6978 | 0.6677 | 589 | 0.8404 | 0.8762 | 0.8579 | 751 | 0.9859 | 0.9589 | 0.9722 | 73 | 0.5257 | 0.6018 | 0.5612 | 221 | 0.5870 | 0.6 | 0.5934 | 45 | 0.6235 | 0.8063 | 0.7032 | 191 | 0.7239 | 0.7592 | 0.7411 | 0.9642 |
| 0.0304 | 8 | 8488 | 0.1738 | 0.7301 | 0.7484 | 0.7392 | 0.9644 | 0.6866 | 0.6774 | 0.6820 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8393 | 0.8083 | 0.8235 | 840 | 0.5882 | 0.4698 | 0.5224 | 149 | 0.6429 | 0.6136 | 0.6279 | 88 | 0.6608 | 0.6978 | 0.6788 | 589 | 0.8268 | 0.8708 | 0.8482 | 751 | 0.9595 | 0.9726 | 0.9660 | 73 | 0.5351 | 0.5520 | 0.5434 | 221 | 0.5208 | 0.5556 | 0.5376 | 45 | 0.6204 | 0.7958 | 0.6972 | 191 | 0.7301 | 0.7484 | 0.7392 | 0.9644 |
| 0.0234 | 9 | 9549 | 0.1860 | 0.7248 | 0.7625 | 0.7432 | 0.9641 | 0.6947 | 0.7097 | 0.7021 | 372 | 0.2963 | 0.5714 | 0.3902 | 28 | 0.8317 | 0.8298 | 0.8308 | 840 | 0.5913 | 0.4564 | 0.5152 | 149 | 0.6118 | 0.5909 | 0.6012 | 88 | 0.6361 | 0.7063 | 0.6693 | 589 | 0.8410 | 0.8735 | 0.8570 | 751 | 0.9859 | 0.9589 | 0.9722 | 73 | 0.5212 | 0.6109 | 0.5625 | 221 | 0.5417 | 0.5778 | 0.5591 | 45 | 0.6414 | 0.7958 | 0.7103 | 191 | 0.7248 | 0.7625 | 0.7432 | 0.9641 |
| 0.0178 | 10 | 10610 | 0.2037 | 0.7434 | 0.7383 | 0.7408 | 0.9640 | 0.7159 | 0.6774 | 0.6961 | 372 | 0.2857 | 0.4286 | 0.3429 | 28 | 0.8333 | 0.8333 | 0.8333 | 840 | 0.6262 | 0.4497 | 0.5234 | 149 | 0.6324 | 0.4886 | 0.5513 | 88 | 0.6568 | 0.6757 | 0.6661 | 589 | 0.8291 | 0.8722 | 0.8501 | 751 | 1.0 | 0.8219 | 0.9023 | 73 | 0.5672 | 0.5158 | 0.5403 | 221 | 0.5 | 0.5333 | 0.5161 | 45 | 0.6952 | 0.7644 | 0.7282 | 191 | 0.7434 | 0.7383 | 0.7408 | 0.9640 |
| 0.0147 | 11 | 11671 | 0.2114 | 0.7440 | 0.7233 | 0.7335 | 0.9643 | 0.7009 | 0.6613 | 0.6805 | 372 | 0.3030 | 0.3571 | 0.3279 | 28 | 0.8352 | 0.8024 | 0.8185 | 840 | 0.6238 | 0.4228 | 0.504 | 149 | 0.65 | 0.5909 | 0.6190 | 88 | 0.6436 | 0.6469 | 0.6452 | 589 | 0.8407 | 0.8575 | 0.8490 | 751 | 0.9315 | 0.9315 | 0.9315 | 73 | 0.5812 | 0.5023 | 0.5388 | 221 | 0.5476 | 0.5111 | 0.5287 | 45 | 0.6835 | 0.7801 | 0.7286 | 191 | 0.7440 | 0.7233 | 0.7335 | 0.9643 |
| 0.0118 | 12 | 12732 | 0.2218 | 0.7331 | 0.7532 | 0.7430 | 0.9649 | 0.7119 | 0.6909 | 0.7012 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8325 | 0.8405 | 0.8365 | 840 | 0.5303 | 0.4698 | 0.4982 | 149 | 0.65 | 0.5909 | 0.6190 | 88 | 0.6690 | 0.6587 | 0.6638 | 589 | 0.8178 | 0.8908 | 0.8528 | 751 | 0.9677 | 0.8219 | 0.8889 | 73 | 0.5408 | 0.5701 | 0.5551 | 221 | 0.5102 | 0.5556 | 0.5319 | 45 | 0.6567 | 0.8010 | 0.7217 | 191 | 0.7331 | 0.7532 | 0.7430 | 0.9649 |
| 0.0093 | 13 | 13793 | 0.2283 | 0.7495 | 0.7359 | 0.7427 | 0.9644 | 0.7163 | 0.6989 | 0.7075 | 372 | 0.3810 | 0.5714 | 0.4571 | 28 | 0.8612 | 0.7905 | 0.8243 | 840 | 0.6111 | 0.4430 | 0.5136 | 149 | 0.6145 | 0.5795 | 0.5965 | 88 | 0.6775 | 0.6740 | 0.6757 | 589 | 0.8346 | 0.8802 | 0.8568 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5619 | 0.5339 | 0.5476 | 221 | 0.4 | 0.4889 | 0.4400 | 45 | 0.6812 | 0.7382 | 0.7085 | 191 | 0.7495 | 0.7359 | 0.7427 | 0.9644 |
| 0.0079 | 14 | 14854 | 0.2383 | 0.7371 | 0.7490 | 0.7430 | 0.9647 | 0.6727 | 0.7016 | 0.6868 | 372 | 0.3261 | 0.5357 | 0.4054 | 28 | 0.8453 | 0.8 | 0.8220 | 840 | 0.5963 | 0.4362 | 0.5039 | 149 | 0.625 | 0.5682 | 0.5952 | 88 | 0.6634 | 0.6927 | 0.6777 | 589 | 0.8433 | 0.8815 | 0.8620 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5427 | 0.5747 | 0.5582 | 221 | 0.5814 | 0.5556 | 0.5682 | 45 | 0.6513 | 0.8115 | 0.7226 | 191 | 0.7371 | 0.7490 | 0.7430 | 0.9647 |
| 0.0068 | 15 | 15915 | 0.2511 | 0.7255 | 0.7359 | 0.7306 | 0.9639 | 0.6826 | 0.6532 | 0.6676 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8295 | 0.8167 | 0.8230 | 840 | 0.5263 | 0.4698 | 0.4965 | 149 | 0.6575 | 0.5455 | 0.5963 | 88 | 0.6549 | 0.6604 | 0.6577 | 589 | 0.8242 | 0.8802 | 0.8513 | 751 | 0.9833 | 0.8082 | 0.8872 | 73 | 0.5398 | 0.5520 | 0.5459 | 221 | 0.36 | 0.4 | 0.3789 | 45 | 0.6511 | 0.8010 | 0.7183 | 191 | 0.7255 | 0.7359 | 0.7306 | 0.9639 |
| 0.0061 | 16 | 16976 | 0.2497 | 0.7253 | 0.7690 | 0.7465 | 0.9648 | 0.6824 | 0.6989 | 0.6906 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8473 | 0.8321 | 0.8396 | 840 | 0.4583 | 0.5168 | 0.4858 | 149 | 0.6494 | 0.5682 | 0.6061 | 88 | 0.6556 | 0.7368 | 0.6938 | 589 | 0.8382 | 0.8828 | 0.8599 | 751 | 0.9841 | 0.8493 | 0.9118 | 73 | 0.5341 | 0.6380 | 0.5814 | 221 | 0.5 | 0.5333 | 0.5161 | 45 | 0.6622 | 0.7801 | 0.7163 | 191 | 0.7253 | 0.7690 | 0.7465 | 0.9648 |
| 0.0054 | 17 | 18037 | 0.2554 | 0.7323 | 0.7625 | 0.7471 | 0.9650 | 0.6870 | 0.6962 | 0.6916 | 372 | 0.3421 | 0.4643 | 0.3939 | 28 | 0.8463 | 0.8262 | 0.8361 | 840 | 0.5902 | 0.4832 | 0.5314 | 149 | 0.6753 | 0.5909 | 0.6303 | 88 | 0.6640 | 0.7148 | 0.6885 | 589 | 0.8317 | 0.8948 | 0.8621 | 751 | 0.9437 | 0.9178 | 0.9306 | 73 | 0.5210 | 0.5611 | 0.5403 | 221 | 0.5 | 0.5111 | 0.5055 | 45 | 0.6102 | 0.8115 | 0.6966 | 191 | 0.7323 | 0.7625 | 0.7471 | 0.9650 |
| 0.005 | 18 | 19098 | 0.2601 | 0.7273 | 0.7747 | 0.7503 | 0.9654 | 0.6970 | 0.7608 | 0.7275 | 372 | 0.2830 | 0.5357 | 0.3704 | 28 | 0.8320 | 0.8488 | 0.8403 | 840 | 0.5841 | 0.4430 | 0.5038 | 149 | 0.6477 | 0.6477 | 0.6477 | 88 | 0.6378 | 0.6995 | 0.6672 | 589 | 0.8501 | 0.8908 | 0.8700 | 751 | 0.9722 | 0.9589 | 0.9655 | 73 | 0.5323 | 0.5973 | 0.5629 | 221 | 0.4444 | 0.4444 | 0.4444 | 45 | 0.624 | 0.8168 | 0.7075 | 191 | 0.7273 | 0.7747 | 0.7503 | 0.9654 |
| 0.0044 | 19 | 20159 | 0.2602 | 0.7369 | 0.7616 | 0.7490 | 0.9656 | 0.7124 | 0.7124 | 0.7124 | 372 | 0.3415 | 0.5 | 0.4058 | 28 | 0.8239 | 0.8631 | 0.8430 | 840 | 0.6355 | 0.4564 | 0.5313 | 149 | 0.6667 | 0.6136 | 0.6391 | 88 | 0.6517 | 0.6638 | 0.6577 | 589 | 0.8405 | 0.8842 | 0.8618 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5144 | 0.5656 | 0.5388 | 221 | 0.5217 | 0.5333 | 0.5275 | 45 | 0.6550 | 0.7853 | 0.7143 | 191 | 0.7369 | 0.7616 | 0.7490 | 0.9656 |
| 0.004 | 20 | 21220 | 0.2677 | 0.7347 | 0.7702 | 0.7520 | 0.9658 | 0.7374 | 0.7097 | 0.7233 | 372 | 0.2857 | 0.4286 | 0.3429 | 28 | 0.8466 | 0.8345 | 0.8405 | 840 | 0.6050 | 0.4832 | 0.5373 | 149 | 0.6667 | 0.6136 | 0.6391 | 88 | 0.6593 | 0.7131 | 0.6852 | 589 | 0.8240 | 0.8975 | 0.8591 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.4981 | 0.5837 | 0.5375 | 221 | 0.5102 | 0.5556 | 0.5319 | 45 | 0.6371 | 0.8272 | 0.7198 | 191 | 0.7347 | 0.7702 | 0.7520 | 0.9658 |
| 0.0034 | 21 | 22281 | 0.2743 | 0.7386 | 0.7717 | 0.7548 | 0.9657 | 0.6984 | 0.7097 | 0.704 | 372 | 0.3784 | 0.5 | 0.4308 | 28 | 0.8475 | 0.8333 | 0.8403 | 840 | 0.6333 | 0.5101 | 0.5651 | 149 | 0.6190 | 0.5909 | 0.6047 | 88 | 0.6512 | 0.7385 | 0.6921 | 589 | 0.8428 | 0.8921 | 0.8668 | 751 | 0.9846 | 0.8767 | 0.9275 | 73 | 0.5513 | 0.5837 | 0.5670 | 221 | 0.5106 | 0.5333 | 0.5217 | 45 | 0.6379 | 0.8115 | 0.7143 | 191 | 0.7386 | 0.7717 | 0.7548 | 0.9657 |
| 0.0033 | 22 | 23342 | 0.2788 | 0.7418 | 0.7520 | 0.7469 | 0.9652 | 0.7143 | 0.6989 | 0.7065 | 372 | 0.3182 | 0.5 | 0.3889 | 28 | 0.8367 | 0.8298 | 0.8332 | 840 | 0.6168 | 0.4430 | 0.5156 | 149 | 0.6235 | 0.6023 | 0.6127 | 88 | 0.6758 | 0.6689 | 0.6724 | 589 | 0.8327 | 0.8815 | 0.8564 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5458 | 0.5928 | 0.5683 | 221 | 0.4783 | 0.4889 | 0.4835 | 45 | 0.6637 | 0.7853 | 0.7194 | 191 | 0.7418 | 0.7520 | 0.7469 | 0.9652 |
| 0.0033 | 23 | 24403 | 0.2831 | 0.7342 | 0.7535 | 0.7437 | 0.9650 | 0.6981 | 0.6962 | 0.6972 | 372 | 0.3784 | 0.5 | 0.4308 | 28 | 0.8499 | 0.8024 | 0.8255 | 840 | 0.5034 | 0.4966 | 0.5 | 149 | 0.6067 | 0.6136 | 0.6102 | 88 | 0.6581 | 0.6961 | 0.6766 | 589 | 0.8350 | 0.8961 | 0.8645 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5424 | 0.5792 | 0.5602 | 221 | 0.3774 | 0.4444 | 0.4082 | 45 | 0.7048 | 0.7749 | 0.7382 | 191 | 0.7342 | 0.7535 | 0.7437 | 0.9650 |
| 0.0029 | 24 | 25464 | 0.2931 | 0.7544 | 0.7380 | 0.7461 | 0.9648 | 0.7365 | 0.6989 | 0.7172 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8535 | 0.7976 | 0.8246 | 840 | 0.5849 | 0.4161 | 0.4863 | 149 | 0.6622 | 0.5568 | 0.6049 | 88 | 0.6672 | 0.6706 | 0.6689 | 589 | 0.8474 | 0.8802 | 0.8635 | 751 | 0.9701 | 0.8904 | 0.9286 | 73 | 0.5550 | 0.5475 | 0.5513 | 221 | 0.4889 | 0.4889 | 0.4889 | 45 | 0.7023 | 0.7906 | 0.7438 | 191 | 0.7544 | 0.7380 | 0.7461 | 0.9648 |
| 0.0028 | 25 | 26525 | 0.2899 | 0.7489 | 0.7574 | 0.7531 | 0.9654 | 0.7021 | 0.7097 | 0.7059 | 372 | 0.3902 | 0.5714 | 0.4638 | 28 | 0.8635 | 0.8131 | 0.8375 | 840 | 0.6182 | 0.4564 | 0.5251 | 149 | 0.6471 | 0.625 | 0.6358 | 88 | 0.6613 | 0.6995 | 0.6799 | 589 | 0.8454 | 0.9028 | 0.8731 | 751 | 0.9583 | 0.9452 | 0.9517 | 73 | 0.5681 | 0.5475 | 0.5576 | 221 | 0.4222 | 0.4222 | 0.4222 | 45 | 0.6608 | 0.7853 | 0.7177 | 191 | 0.7489 | 0.7574 | 0.7531 | 0.9654 |
| 0.0023 | 26 | 27586 | 0.2922 | 0.7413 | 0.7532 | 0.7472 | 0.9649 | 0.6897 | 0.6989 | 0.6943 | 372 | 0.35 | 0.5 | 0.4118 | 28 | 0.85 | 0.8298 | 0.8398 | 840 | 0.6161 | 0.4631 | 0.5287 | 149 | 0.6486 | 0.5455 | 0.5926 | 88 | 0.6486 | 0.6927 | 0.6700 | 589 | 0.8457 | 0.8828 | 0.8638 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5636 | 0.5611 | 0.5624 | 221 | 0.3958 | 0.4222 | 0.4086 | 45 | 0.6638 | 0.7958 | 0.7238 | 191 | 0.7413 | 0.7532 | 0.7472 | 0.9649 |
| 0.0021 | 27 | 28647 | 0.2967 | 0.7514 | 0.7568 | 0.7541 | 0.9656 | 0.7081 | 0.7043 | 0.7062 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8547 | 0.8190 | 0.8365 | 840 | 0.5641 | 0.4430 | 0.4962 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6677 | 0.7097 | 0.6881 | 589 | 0.8459 | 0.8842 | 0.8646 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5806 | 0.5701 | 0.5753 | 221 | 0.4898 | 0.5333 | 0.5106 | 45 | 0.7089 | 0.7906 | 0.7475 | 191 | 0.7514 | 0.7568 | 0.7541 | 0.9656 |
| 0.0025 | 28 | 29708 | 0.2957 | 0.7335 | 0.7622 | 0.7475 | 0.9651 | 0.7060 | 0.7231 | 0.7145 | 372 | 0.3077 | 0.4286 | 0.3582 | 28 | 0.8459 | 0.8429 | 0.8444 | 840 | 0.5069 | 0.4899 | 0.4983 | 149 | 0.6438 | 0.5341 | 0.5839 | 88 | 0.6838 | 0.7012 | 0.6924 | 589 | 0.8413 | 0.8895 | 0.8647 | 751 | 0.9552 | 0.8767 | 0.9143 | 73 | 0.4901 | 0.5611 | 0.5232 | 221 | 0.3818 | 0.4667 | 0.42 | 45 | 0.6580 | 0.7958 | 0.7204 | 191 | 0.7335 | 0.7622 | 0.7475 | 0.9651 |
| 0.0023 | 29 | 30769 | 0.3049 | 0.7455 | 0.7544 | 0.7499 | 0.9654 | 0.6997 | 0.7392 | 0.7190 | 372 | 0.3182 | 0.5 | 0.3889 | 28 | 0.8483 | 0.8119 | 0.8297 | 840 | 0.5630 | 0.5101 | 0.5352 | 149 | 0.6579 | 0.5682 | 0.6098 | 88 | 0.6791 | 0.7114 | 0.6949 | 589 | 0.8583 | 0.8628 | 0.8606 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5234 | 0.5566 | 0.5395 | 221 | 0.4565 | 0.4667 | 0.4615 | 45 | 0.7009 | 0.7853 | 0.7407 | 191 | 0.7455 | 0.7544 | 0.7499 | 0.9654 |
| 0.0018 | 30 | 31830 | 0.3042 | 0.7415 | 0.7679 | 0.7544 | 0.9654 | 0.6935 | 0.7419 | 0.7169 | 372 | 0.3333 | 0.5 | 0.4 | 28 | 0.8563 | 0.8226 | 0.8391 | 840 | 0.5878 | 0.5168 | 0.55 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6677 | 0.7470 | 0.7051 | 589 | 0.8544 | 0.8828 | 0.8684 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5300 | 0.5204 | 0.5251 | 221 | 0.4375 | 0.4667 | 0.4516 | 45 | 0.6417 | 0.8063 | 0.7146 | 191 | 0.7415 | 0.7679 | 0.7544 | 0.9654 |
| 0.0017 | 31 | 32891 | 0.3071 | 0.7540 | 0.7481 | 0.7510 | 0.9660 | 0.7083 | 0.7312 | 0.7196 | 372 | 0.4054 | 0.5357 | 0.4615 | 28 | 0.8552 | 0.8226 | 0.8386 | 840 | 0.6311 | 0.4362 | 0.5159 | 149 | 0.6220 | 0.5795 | 0.6 | 88 | 0.6734 | 0.6757 | 0.6746 | 589 | 0.8626 | 0.8775 | 0.8700 | 751 | 0.9855 | 0.9315 | 0.9577 | 73 | 0.5307 | 0.5475 | 0.5390 | 221 | 0.3830 | 0.4 | 0.3913 | 45 | 0.7019 | 0.7644 | 0.7318 | 191 | 0.7540 | 0.7481 | 0.7510 | 0.9660 |
| 0.0018 | 32 | 33952 | 0.3190 | 0.7499 | 0.7553 | 0.7526 | 0.9656 | 0.7182 | 0.7124 | 0.7152 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8586 | 0.7952 | 0.8257 | 840 | 0.6116 | 0.4966 | 0.5481 | 149 | 0.6463 | 0.6023 | 0.6235 | 88 | 0.6805 | 0.6978 | 0.6890 | 589 | 0.8360 | 0.8895 | 0.8619 | 751 | 0.9855 | 0.9315 | 0.9577 | 73 | 0.5633 | 0.5837 | 0.5733 | 221 | 0.5106 | 0.5333 | 0.5217 | 45 | 0.6711 | 0.8010 | 0.7303 | 191 | 0.7499 | 0.7553 | 0.7526 | 0.9656 |
| 0.0018 | 33 | 35013 | 0.3094 | 0.7460 | 0.7774 | 0.7614 | 0.9665 | 0.7147 | 0.7473 | 0.7306 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8556 | 0.8393 | 0.8474 | 840 | 0.6273 | 0.4631 | 0.5328 | 149 | 0.6506 | 0.6136 | 0.6316 | 88 | 0.6787 | 0.7351 | 0.7058 | 589 | 0.8344 | 0.8988 | 0.8654 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5702 | 0.6063 | 0.5877 | 221 | 0.3036 | 0.3778 | 0.3366 | 45 | 0.6567 | 0.8010 | 0.7217 | 191 | 0.7460 | 0.7774 | 0.7614 | 0.9665 |
| 0.0015 | 34 | 36074 | 0.3091 | 0.7441 | 0.7759 | 0.7597 | 0.9665 | 0.7113 | 0.7285 | 0.7198 | 372 | 0.3404 | 0.5714 | 0.4267 | 28 | 0.8266 | 0.8512 | 0.8387 | 840 | 0.5405 | 0.5369 | 0.5387 | 149 | 0.6707 | 0.625 | 0.6471 | 88 | 0.6856 | 0.7182 | 0.7015 | 589 | 0.8517 | 0.8868 | 0.8689 | 751 | 1.0 | 0.9452 | 0.9718 | 73 | 0.5752 | 0.5882 | 0.5817 | 221 | 0.3878 | 0.4222 | 0.4043 | 45 | 0.6830 | 0.8010 | 0.7373 | 191 | 0.7441 | 0.7759 | 0.7597 | 0.9665 |
| 0.0015 | 35 | 37135 | 0.3185 | 0.7487 | 0.7619 | 0.7552 | 0.9660 | 0.6982 | 0.7339 | 0.7156 | 372 | 0.3415 | 0.5 | 0.4058 | 28 | 0.8685 | 0.8179 | 0.8424 | 840 | 0.5504 | 0.4765 | 0.5108 | 149 | 0.6353 | 0.6136 | 0.6243 | 88 | 0.6636 | 0.7267 | 0.6937 | 589 | 0.8654 | 0.8815 | 0.8734 | 751 | 1.0 | 0.9315 | 0.9645 | 73 | 0.55 | 0.5475 | 0.5488 | 221 | 0.3673 | 0.4 | 0.3830 | 45 | 0.6937 | 0.8063 | 0.7458 | 191 | 0.7487 | 0.7619 | 0.7552 | 0.9660 |
| 0.0015 | 36 | 38196 | 0.3203 | 0.7438 | 0.7649 | 0.7542 | 0.9660 | 0.6961 | 0.7204 | 0.7081 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8617 | 0.8381 | 0.8497 | 840 | 0.5203 | 0.5168 | 0.5185 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6710 | 0.7063 | 0.6882 | 589 | 0.8495 | 0.8868 | 0.8678 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5561 | 0.5385 | 0.5471 | 221 | 0.42 | 0.4667 | 0.4421 | 45 | 0.6568 | 0.8115 | 0.7260 | 191 | 0.7438 | 0.7649 | 0.7542 | 0.9660 |
| 0.0013 | 37 | 39257 | 0.3298 | 0.7315 | 0.7732 | 0.7518 | 0.9656 | 0.6915 | 0.7231 | 0.7070 | 372 | 0.3333 | 0.5714 | 0.4211 | 28 | 0.8654 | 0.8190 | 0.8416 | 840 | 0.4793 | 0.5436 | 0.5094 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6656 | 0.7267 | 0.6948 | 589 | 0.8289 | 0.9028 | 0.8642 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5574 | 0.5928 | 0.5746 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6408 | 0.8220 | 0.7202 | 191 | 0.7315 | 0.7732 | 0.7518 | 0.9656 |
| 0.0012 | 38 | 40318 | 0.3311 | 0.7533 | 0.7610 | 0.7571 | 0.9664 | 0.7060 | 0.7231 | 0.7145 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8613 | 0.8357 | 0.8483 | 840 | 0.6339 | 0.4765 | 0.5441 | 149 | 0.6543 | 0.6023 | 0.6272 | 88 | 0.6528 | 0.7182 | 0.6839 | 589 | 0.8424 | 0.8828 | 0.8622 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6031 | 0.5294 | 0.5639 | 221 | 0.4130 | 0.4222 | 0.4176 | 45 | 0.7122 | 0.7644 | 0.7374 | 191 | 0.7533 | 0.7610 | 0.7571 | 0.9664 |
| 0.0012 | 39 | 41379 | 0.3328 | 0.7444 | 0.7553 | 0.7498 | 0.9657 | 0.6818 | 0.7258 | 0.7031 | 372 | 0.3478 | 0.5714 | 0.4324 | 28 | 0.8561 | 0.8143 | 0.8347 | 840 | 0.6055 | 0.4430 | 0.5116 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6715 | 0.7046 | 0.6877 | 589 | 0.8461 | 0.8708 | 0.8583 | 751 | 0.9706 | 0.9041 | 0.9362 | 73 | 0.5665 | 0.5973 | 0.5815 | 221 | 0.4082 | 0.4444 | 0.4255 | 45 | 0.6770 | 0.8010 | 0.7338 | 191 | 0.7444 | 0.7553 | 0.7498 | 0.9657 |
| 0.0014 | 40 | 42440 | 0.3415 | 0.7421 | 0.7437 | 0.7429 | 0.9641 | 0.6931 | 0.7043 | 0.6987 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8422 | 0.8262 | 0.8341 | 840 | 0.6190 | 0.4362 | 0.5118 | 149 | 0.6622 | 0.5568 | 0.6049 | 88 | 0.6888 | 0.6350 | 0.6608 | 589 | 0.8175 | 0.8828 | 0.8489 | 751 | 1.0 | 0.9178 | 0.9571 | 73 | 0.5584 | 0.5837 | 0.5708 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6580 | 0.7958 | 0.7204 | 191 | 0.7421 | 0.7437 | 0.7429 | 0.9641 |
| 0.0013 | 41 | 43501 | 0.3401 | 0.7501 | 0.7487 | 0.7494 | 0.9651 | 0.6915 | 0.7231 | 0.7070 | 372 | 0.3421 | 0.4643 | 0.3939 | 28 | 0.8545 | 0.8179 | 0.8358 | 840 | 0.6346 | 0.4430 | 0.5217 | 149 | 0.6812 | 0.5341 | 0.5987 | 88 | 0.6728 | 0.6808 | 0.6768 | 589 | 0.8380 | 0.8748 | 0.8560 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.5860 | 0.5701 | 0.5780 | 221 | 0.4423 | 0.5111 | 0.4742 | 45 | 0.6787 | 0.7853 | 0.7282 | 191 | 0.7501 | 0.7487 | 0.7494 | 0.9651 |
| 0.0011 | 42 | 44562 | 0.3468 | 0.7426 | 0.7687 | 0.7554 | 0.9650 | 0.6965 | 0.7527 | 0.7235 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8667 | 0.8202 | 0.8428 | 840 | 0.6408 | 0.4430 | 0.5238 | 149 | 0.6709 | 0.6023 | 0.6347 | 88 | 0.6902 | 0.7148 | 0.7023 | 589 | 0.8404 | 0.8975 | 0.8680 | 751 | 0.9444 | 0.9315 | 0.9379 | 73 | 0.5191 | 0.6154 | 0.5631 | 221 | 0.3469 | 0.3778 | 0.3617 | 45 | 0.6210 | 0.8063 | 0.7016 | 191 | 0.7426 | 0.7687 | 0.7554 | 0.9650 |
| 0.0015 | 43 | 45623 | 0.3440 | 0.7566 | 0.7422 | 0.7493 | 0.9648 | 0.6937 | 0.7366 | 0.7145 | 372 | 0.3846 | 0.5357 | 0.4478 | 28 | 0.8608 | 0.8095 | 0.8344 | 840 | 0.6082 | 0.3960 | 0.4797 | 149 | 0.7 | 0.5568 | 0.6203 | 88 | 0.6766 | 0.6570 | 0.6667 | 589 | 0.8317 | 0.8881 | 0.8590 | 751 | 0.9701 | 0.8904 | 0.9286 | 73 | 0.6224 | 0.5520 | 0.5851 | 221 | 0.3913 | 0.4 | 0.3956 | 45 | 0.7081 | 0.7749 | 0.74 | 191 | 0.7566 | 0.7422 | 0.7493 | 0.9648 |
| 0.0011 | 44 | 46684 | 0.3354 | 0.7565 | 0.7640 | 0.7602 | 0.9664 | 0.7062 | 0.7366 | 0.7211 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8483 | 0.8452 | 0.8468 | 840 | 0.6095 | 0.4295 | 0.5039 | 149 | 0.6883 | 0.6023 | 0.6424 | 88 | 0.6880 | 0.6740 | 0.6810 | 589 | 0.8517 | 0.8948 | 0.8727 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.6238 | 0.5928 | 0.6079 | 221 | 0.3830 | 0.4 | 0.3913 | 45 | 0.65 | 0.8168 | 0.7239 | 191 | 0.7565 | 0.7640 | 0.7602 | 0.9664 |
| 0.0011 | 45 | 47745 | 0.3347 | 0.7485 | 0.7622 | 0.7553 | 0.9655 | 0.7088 | 0.7392 | 0.7237 | 372 | 0.3636 | 0.5714 | 0.4444 | 28 | 0.8603 | 0.8286 | 0.8441 | 840 | 0.5882 | 0.4698 | 0.5224 | 149 | 0.6023 | 0.6023 | 0.6023 | 88 | 0.6770 | 0.6689 | 0.6729 | 589 | 0.8417 | 0.8921 | 0.8662 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6037 | 0.5928 | 0.5982 | 221 | 0.4583 | 0.4889 | 0.4731 | 45 | 0.6275 | 0.8115 | 0.7078 | 191 | 0.7485 | 0.7622 | 0.7553 | 0.9655 |
| 0.0011 | 46 | 48806 | 0.3421 | 0.7481 | 0.7640 | 0.7559 | 0.9657 | 0.7261 | 0.7339 | 0.7299 | 372 | 0.3171 | 0.4643 | 0.3768 | 28 | 0.8570 | 0.8202 | 0.8382 | 840 | 0.5691 | 0.4698 | 0.5147 | 149 | 0.6429 | 0.6136 | 0.6279 | 88 | 0.6769 | 0.7114 | 0.6937 | 589 | 0.8311 | 0.8908 | 0.8599 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5714 | 0.5611 | 0.5662 | 221 | 0.5 | 0.5556 | 0.5263 | 45 | 0.6638 | 0.7958 | 0.7238 | 191 | 0.7481 | 0.7640 | 0.7559 | 0.9657 |
| 0.0009 | 47 | 49867 | 0.3487 | 0.7496 | 0.7604 | 0.7550 | 0.9656 | 0.7158 | 0.7043 | 0.7100 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.86 | 0.8190 | 0.8390 | 840 | 0.5496 | 0.4832 | 0.5143 | 149 | 0.7162 | 0.6023 | 0.6543 | 88 | 0.6745 | 0.7284 | 0.7004 | 589 | 0.8346 | 0.8802 | 0.8568 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5566 | 0.5339 | 0.5450 | 221 | 0.5349 | 0.5111 | 0.5227 | 45 | 0.6828 | 0.8115 | 0.7416 | 191 | 0.7496 | 0.7604 | 0.7550 | 0.9656 |
| 0.0009 | 48 | 50928 | 0.3470 | 0.7414 | 0.7649 | 0.7529 | 0.9651 | 0.7092 | 0.7473 | 0.7277 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8541 | 0.8226 | 0.8381 | 840 | 0.5847 | 0.4631 | 0.5169 | 149 | 0.6835 | 0.6136 | 0.6467 | 88 | 0.6801 | 0.7148 | 0.6970 | 589 | 0.8319 | 0.8895 | 0.8597 | 751 | 0.9571 | 0.9178 | 0.9371 | 73 | 0.5307 | 0.5475 | 0.5390 | 221 | 0.4583 | 0.4889 | 0.4731 | 45 | 0.6364 | 0.8063 | 0.7113 | 191 | 0.7414 | 0.7649 | 0.7529 | 0.9651 |
| 0.0011 | 49 | 51989 | 0.3389 | 0.7435 | 0.7664 | 0.7547 | 0.9659 | 0.6957 | 0.7312 | 0.7130 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8561 | 0.8286 | 0.8421 | 840 | 0.6636 | 0.4899 | 0.5637 | 149 | 0.6136 | 0.6136 | 0.6136 | 88 | 0.6732 | 0.6995 | 0.6861 | 589 | 0.8251 | 0.8921 | 0.8573 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5746 | 0.5928 | 0.5835 | 221 | 0.4348 | 0.4444 | 0.4396 | 45 | 0.6390 | 0.8063 | 0.7130 | 191 | 0.7435 | 0.7664 | 0.7547 | 0.9659 |
| 0.0009 | 50 | 53050 | 0.3557 | 0.7490 | 0.7640 | 0.7564 | 0.9659 | 0.6948 | 0.6855 | 0.6901 | 372 | 0.3947 | 0.5357 | 0.4545 | 28 | 0.8584 | 0.8298 | 0.8438 | 840 | 0.6455 | 0.4765 | 0.5483 | 149 | 0.6933 | 0.5909 | 0.6380 | 88 | 0.6745 | 0.7317 | 0.7020 | 589 | 0.8296 | 0.8948 | 0.8610 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6082 | 0.5339 | 0.5687 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6270 | 0.8272 | 0.7133 | 191 | 0.7490 | 0.7640 | 0.7564 | 0.9659 |
| 0.0008 | 51 | 54111 | 0.3492 | 0.7516 | 0.7601 | 0.7558 | 0.9662 | 0.7104 | 0.6989 | 0.7046 | 372 | 0.3714 | 0.4643 | 0.4127 | 28 | 0.8545 | 0.8321 | 0.8432 | 840 | 0.6496 | 0.5101 | 0.5714 | 149 | 0.625 | 0.5682 | 0.5952 | 88 | 0.6722 | 0.6893 | 0.6806 | 589 | 0.8413 | 0.8895 | 0.8647 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5611 | 0.5611 | 0.5611 | 221 | 0.4792 | 0.5111 | 0.4946 | 45 | 0.6724 | 0.8168 | 0.7376 | 191 | 0.7516 | 0.7601 | 0.7558 | 0.9662 |
| 0.0008 | 52 | 55172 | 0.3432 | 0.7526 | 0.7625 | 0.7575 | 0.9661 | 0.7044 | 0.7366 | 0.7201 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8610 | 0.8262 | 0.8433 | 840 | 0.6140 | 0.4698 | 0.5323 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6766 | 0.6927 | 0.6846 | 589 | 0.8403 | 0.8895 | 0.8642 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5849 | 0.5611 | 0.5727 | 221 | 0.46 | 0.5111 | 0.4842 | 45 | 0.6681 | 0.8115 | 0.7329 | 191 | 0.7526 | 0.7625 | 0.7575 | 0.9661 |
| **0.0006** | **53** | **56233** | **0.3565** | **0.7615** | **0.7747** | **0.7681** | **0.9672** | **0.7305** | **0.7285** | **0.7295** | **372** | **0.3721** | **0.5714** | **0.4507** | **28** | **0.8679** | **0.8369** | **0.8521** | **840** | **0.6545** | **0.4832** | **0.5560** | **149** | **0.6625** | **0.6023** | **0.6310** | **88** | **0.6761** | **0.7267** | **0.7005** | **589** | **0.8255** | **0.9068** | **0.8642** | **751** | **1.0** | **0.9589** | **0.9790** | **73** | **0.6030** | **0.5430** | **0.5714** | **221** | **0.5682** | **0.5556** | **0.5618** | **45** | **0.7** | **0.8063** | **0.7494** | **191** | **0.7615** | **0.7747** | **0.7681** | **0.9672** |
| 0.0008 | 54 | 57294 | 0.3480 | 0.7590 | 0.7631 | 0.7610 | 0.9668 | 0.7452 | 0.7312 | 0.7381 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8589 | 0.8190 | 0.8385 | 840 | 0.5935 | 0.4899 | 0.5368 | 149 | 0.7027 | 0.5909 | 0.6420 | 88 | 0.6924 | 0.6842 | 0.6883 | 589 | 0.8432 | 0.8948 | 0.8682 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5856 | 0.5882 | 0.5869 | 221 | 0.5102 | 0.5556 | 0.5319 | 45 | 0.6513 | 0.8115 | 0.7226 | 191 | 0.7590 | 0.7631 | 0.7610 | 0.9668 |
| 0.0008 | 55 | 58355 | 0.3568 | 0.7601 | 0.7622 | 0.7612 | 0.9663 | 0.7228 | 0.7151 | 0.7189 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8429 | 0.8429 | 0.8429 | 840 | 0.6634 | 0.4497 | 0.536 | 149 | 0.7 | 0.5568 | 0.6203 | 88 | 0.6828 | 0.7165 | 0.6993 | 589 | 0.8655 | 0.8828 | 0.8741 | 751 | 0.9853 | 0.9178 | 0.9504 | 73 | 0.5909 | 0.5294 | 0.5585 | 221 | 0.5106 | 0.5333 | 0.5217 | 45 | 0.6429 | 0.8010 | 0.7133 | 191 | 0.7601 | 0.7622 | 0.7612 | 0.9663 |
| 0.0009 | 56 | 59416 | 0.3498 | 0.7542 | 0.7580 | 0.7561 | 0.9661 | 0.7178 | 0.7043 | 0.7110 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8379 | 0.8429 | 0.8404 | 840 | 0.6634 | 0.4497 | 0.536 | 149 | 0.6322 | 0.625 | 0.6286 | 88 | 0.6895 | 0.6825 | 0.6860 | 589 | 0.8513 | 0.8842 | 0.8674 | 751 | 0.9577 | 0.9315 | 0.9444 | 73 | 0.5613 | 0.5385 | 0.5497 | 221 | 0.5111 | 0.5111 | 0.5111 | 45 | 0.6667 | 0.8063 | 0.7299 | 191 | 0.7542 | 0.7580 | 0.7561 | 0.9661 |
| 0.0007 | 57 | 60477 | 0.3486 | 0.7479 | 0.7711 | 0.7593 | 0.9663 | 0.7143 | 0.7392 | 0.7266 | 372 | 0.3571 | 0.5357 | 0.4286 | 28 | 0.8417 | 0.8417 | 0.8417 | 840 | 0.5923 | 0.5168 | 0.5520 | 149 | 0.6667 | 0.6136 | 0.6391 | 88 | 0.6720 | 0.7165 | 0.6935 | 589 | 0.8562 | 0.8802 | 0.8680 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5670 | 0.5747 | 0.5708 | 221 | 0.4583 | 0.4889 | 0.4731 | 45 | 0.6623 | 0.8010 | 0.7251 | 191 | 0.7479 | 0.7711 | 0.7593 | 0.9663 |
| 0.0007 | 58 | 61538 | 0.3497 | 0.7539 | 0.7744 | 0.7640 | 0.9667 | 0.7143 | 0.7392 | 0.7266 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8449 | 0.8429 | 0.8439 | 840 | 0.6429 | 0.4832 | 0.5517 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6708 | 0.7267 | 0.6976 | 589 | 0.8499 | 0.8975 | 0.8731 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.6108 | 0.5611 | 0.5849 | 221 | 0.5 | 0.4889 | 0.4944 | 45 | 0.6525 | 0.8063 | 0.7213 | 191 | 0.7539 | 0.7744 | 0.7640 | 0.9667 |
| 0.0008 | 59 | 62599 | 0.3581 | 0.7474 | 0.7762 | 0.7615 | 0.9662 | 0.7183 | 0.7473 | 0.7325 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8439 | 0.8429 | 0.8434 | 840 | 0.5467 | 0.5503 | 0.5485 | 149 | 0.6709 | 0.6023 | 0.6347 | 88 | 0.6693 | 0.7250 | 0.6960 | 589 | 0.8454 | 0.8881 | 0.8662 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5961 | 0.5475 | 0.5708 | 221 | 0.5 | 0.5333 | 0.5161 | 45 | 0.6769 | 0.8115 | 0.7381 | 191 | 0.7474 | 0.7762 | 0.7615 | 0.9662 |
| 0.0007 | 60 | 63660 | 0.3636 | 0.7494 | 0.7676 | 0.7584 | 0.9662 | 0.7016 | 0.7204 | 0.7109 | 372 | 0.3488 | 0.5357 | 0.4225 | 28 | 0.8489 | 0.8357 | 0.8422 | 840 | 0.6 | 0.4832 | 0.5353 | 149 | 0.6538 | 0.5795 | 0.6145 | 88 | 0.6828 | 0.7199 | 0.7008 | 589 | 0.8476 | 0.8815 | 0.8642 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5579 | 0.5882 | 0.5727 | 221 | 0.4762 | 0.4444 | 0.4598 | 45 | 0.6797 | 0.8220 | 0.7441 | 191 | 0.7494 | 0.7676 | 0.7584 | 0.9662 |
| 0.0008 | 61 | 64721 | 0.3646 | 0.7538 | 0.7574 | 0.7556 | 0.9660 | 0.6854 | 0.7204 | 0.7025 | 372 | 0.3659 | 0.5357 | 0.4348 | 28 | 0.8573 | 0.8369 | 0.8470 | 840 | 0.6306 | 0.4698 | 0.5385 | 149 | 0.6667 | 0.5909 | 0.6265 | 88 | 0.6896 | 0.6978 | 0.6937 | 589 | 0.8495 | 0.8722 | 0.8607 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5728 | 0.5520 | 0.5622 | 221 | 0.375 | 0.4 | 0.3871 | 45 | 0.6830 | 0.8010 | 0.7373 | 191 | 0.7538 | 0.7574 | 0.7556 | 0.9660 |
| 0.0006 | 62 | 65782 | 0.3697 | 0.7510 | 0.7460 | 0.7485 | 0.9651 | 0.6885 | 0.7070 | 0.6976 | 372 | 0.4286 | 0.5357 | 0.4762 | 28 | 0.8663 | 0.7869 | 0.8247 | 840 | 0.5902 | 0.4832 | 0.5314 | 149 | 0.6757 | 0.5682 | 0.6173 | 88 | 0.6667 | 0.6927 | 0.6794 | 589 | 0.8432 | 0.8948 | 0.8682 | 751 | 0.9851 | 0.9041 | 0.9429 | 73 | 0.5829 | 0.5566 | 0.5694 | 221 | 0.3673 | 0.4 | 0.3830 | 45 | 0.6995 | 0.7801 | 0.7376 | 191 | 0.7510 | 0.7460 | 0.7485 | 0.9651 |
| 0.0006 | 63 | 66843 | 0.3661 | 0.7504 | 0.7502 | 0.7503 | 0.9655 | 0.6909 | 0.6909 | 0.6909 | 372 | 0.4286 | 0.5357 | 0.4762 | 28 | 0.8571 | 0.8143 | 0.8352 | 840 | 0.5814 | 0.5034 | 0.5396 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.7013 | 0.6655 | 0.6829 | 589 | 0.8348 | 0.8948 | 0.8638 | 751 | 0.9571 | 0.9178 | 0.9371 | 73 | 0.5570 | 0.5747 | 0.5657 | 221 | 0.3830 | 0.4 | 0.3913 | 45 | 0.6786 | 0.7958 | 0.7325 | 191 | 0.7504 | 0.7502 | 0.7503 | 0.9655 |
| 0.0006 | 64 | 67904 | 0.3711 | 0.7404 | 0.7628 | 0.7514 | 0.9656 | 0.6911 | 0.7097 | 0.7003 | 372 | 0.3784 | 0.5 | 0.4308 | 28 | 0.8455 | 0.8405 | 0.8430 | 840 | 0.6 | 0.5034 | 0.5474 | 149 | 0.65 | 0.5909 | 0.6190 | 88 | 0.6667 | 0.7029 | 0.6843 | 589 | 0.8350 | 0.8961 | 0.8645 | 751 | 0.9714 | 0.9315 | 0.9510 | 73 | 0.5673 | 0.5339 | 0.5501 | 221 | 0.2917 | 0.3111 | 0.3011 | 45 | 0.6568 | 0.8115 | 0.7260 | 191 | 0.7404 | 0.7628 | 0.7514 | 0.9656 |
| 0.0007 | 65 | 68965 | 0.3672 | 0.7377 | 0.7696 | 0.7533 | 0.9661 | 0.7005 | 0.7419 | 0.7206 | 372 | 0.3333 | 0.5357 | 0.4110 | 28 | 0.8433 | 0.8393 | 0.8413 | 840 | 0.5839 | 0.5369 | 0.5594 | 149 | 0.6506 | 0.6136 | 0.6316 | 88 | 0.6840 | 0.7131 | 0.6983 | 589 | 0.8412 | 0.8815 | 0.8609 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.5427 | 0.5747 | 0.5582 | 221 | 0.3019 | 0.3556 | 0.3265 | 45 | 0.6360 | 0.7958 | 0.7070 | 191 | 0.7377 | 0.7696 | 0.7533 | 0.9661 |
| 0.0005 | 66 | 70026 | 0.3768 | 0.7496 | 0.7520 | 0.7508 | 0.9657 | 0.6903 | 0.7070 | 0.6985 | 372 | 0.3415 | 0.5 | 0.4058 | 28 | 0.8454 | 0.8333 | 0.8393 | 840 | 0.6372 | 0.4832 | 0.5496 | 149 | 0.6795 | 0.6023 | 0.6386 | 88 | 0.6914 | 0.6655 | 0.6782 | 589 | 0.8483 | 0.8788 | 0.8633 | 751 | 0.9577 | 0.9315 | 0.9444 | 73 | 0.5714 | 0.5792 | 0.5753 | 221 | 0.3 | 0.3333 | 0.3158 | 45 | 0.6696 | 0.7958 | 0.7273 | 191 | 0.7496 | 0.7520 | 0.7508 | 0.9657 |
| 0.0007 | 67 | 71087 | 0.3682 | 0.7461 | 0.7664 | 0.7561 | 0.9656 | 0.7094 | 0.7285 | 0.7188 | 372 | 0.3409 | 0.5357 | 0.4167 | 28 | 0.8563 | 0.8369 | 0.8465 | 840 | 0.6290 | 0.5235 | 0.5714 | 149 | 0.6974 | 0.6023 | 0.6463 | 88 | 0.6935 | 0.6876 | 0.6905 | 589 | 0.8363 | 0.8842 | 0.8595 | 751 | 0.9437 | 0.9178 | 0.9306 | 73 | 0.5175 | 0.6018 | 0.5565 | 221 | 0.4694 | 0.5111 | 0.4894 | 45 | 0.6483 | 0.8010 | 0.7166 | 191 | 0.7461 | 0.7664 | 0.7561 | 0.9656 |
| 0.0005 | 68 | 72148 | 0.3815 | 0.7590 | 0.7416 | 0.7502 | 0.9654 | 0.7092 | 0.7016 | 0.7054 | 372 | 0.4054 | 0.5357 | 0.4615 | 28 | 0.8489 | 0.8095 | 0.8288 | 840 | 0.6796 | 0.4698 | 0.5556 | 149 | 0.6456 | 0.5795 | 0.6108 | 88 | 0.6801 | 0.6570 | 0.6684 | 589 | 0.8476 | 0.8815 | 0.8642 | 751 | 0.9571 | 0.9178 | 0.9371 | 73 | 0.615 | 0.5566 | 0.5843 | 221 | 0.4348 | 0.4444 | 0.4396 | 45 | 0.6759 | 0.7644 | 0.7174 | 191 | 0.7590 | 0.7416 | 0.7502 | 0.9654 |
| 0.0006 | 69 | 73209 | 0.3919 | 0.7494 | 0.7487 | 0.7491 | 0.9650 | 0.6888 | 0.6962 | 0.6925 | 372 | 0.3590 | 0.5 | 0.4179 | 28 | 0.8416 | 0.8095 | 0.8252 | 840 | 0.5865 | 0.5235 | 0.5532 | 149 | 0.6901 | 0.5568 | 0.6164 | 88 | 0.6950 | 0.6808 | 0.6878 | 589 | 0.8490 | 0.8908 | 0.8694 | 751 | 1.0 | 0.9041 | 0.9496 | 73 | 0.5662 | 0.5611 | 0.5636 | 221 | 0.3265 | 0.3556 | 0.3404 | 45 | 0.6881 | 0.7853 | 0.7335 | 191 | 0.7494 | 0.7487 | 0.7491 | 0.9650 |
| 0.0006 | 70 | 74270 | 0.3704 | 0.7587 | 0.7619 | 0.7603 | 0.9666 | 0.6891 | 0.7151 | 0.7018 | 372 | 0.3947 | 0.5357 | 0.4545 | 28 | 0.8376 | 0.8536 | 0.8455 | 840 | 0.6697 | 0.4899 | 0.5659 | 149 | 0.6420 | 0.5909 | 0.6154 | 88 | 0.7018 | 0.6791 | 0.6903 | 589 | 0.8491 | 0.8842 | 0.8663 | 751 | 0.9857 | 0.9452 | 0.9650 | 73 | 0.6219 | 0.5656 | 0.5924 | 221 | 0.3913 | 0.4 | 0.3956 | 45 | 0.6802 | 0.7906 | 0.7312 | 191 | 0.7587 | 0.7619 | 0.7603 | 0.9666 |
| 0.0005 | 71 | 75331 | 0.3841 | 0.7501 | 0.7634 | 0.7567 | 0.9659 | 0.7005 | 0.6855 | 0.6929 | 372 | 0.4054 | 0.5357 | 0.4615 | 28 | 0.8531 | 0.8298 | 0.8413 | 840 | 0.6293 | 0.4899 | 0.5509 | 149 | 0.6410 | 0.5682 | 0.6024 | 88 | 0.6774 | 0.7165 | 0.6964 | 589 | 0.8264 | 0.9001 | 0.8617 | 751 | 0.9706 | 0.9041 | 0.9362 | 73 | 0.5882 | 0.5882 | 0.5882 | 221 | 0.4545 | 0.4444 | 0.4494 | 45 | 0.6864 | 0.7906 | 0.7348 | 191 | 0.7501 | 0.7634 | 0.7567 | 0.9659 |
| 0.0005 | 72 | 76392 | 0.3830 | 0.7605 | 0.7496 | 0.7550 | 0.9655 | 0.7036 | 0.6828 | 0.6930 | 372 | 0.3824 | 0.4643 | 0.4194 | 28 | 0.8618 | 0.8238 | 0.8424 | 840 | 0.6542 | 0.4698 | 0.5469 | 149 | 0.6582 | 0.5909 | 0.6228 | 88 | 0.6935 | 0.6723 | 0.6828 | 589 | 0.8476 | 0.8815 | 0.8642 | 751 | 0.9577 | 0.9315 | 0.9444 | 73 | 0.5830 | 0.5882 | 0.5856 | 221 | 0.4043 | 0.4222 | 0.4130 | 45 | 0.6892 | 0.8010 | 0.7409 | 191 | 0.7605 | 0.7496 | 0.7550 | 0.9655 |
| 0.0006 | 73 | 77453 | 0.3839 | 0.7611 | 0.7547 | 0.7579 | 0.9661 | 0.712 | 0.7177 | 0.7149 | 372 | 0.3429 | 0.4286 | 0.3810 | 28 | 0.8494 | 0.8393 | 0.8443 | 840 | 0.6542 | 0.4698 | 0.5469 | 149 | 0.6538 | 0.5795 | 0.6145 | 88 | 0.6877 | 0.6655 | 0.6764 | 589 | 0.8428 | 0.8921 | 0.8668 | 751 | 0.9710 | 0.9178 | 0.9437 | 73 | 0.6257 | 0.5294 | 0.5735 | 221 | 0.4468 | 0.4667 | 0.4565 | 45 | 0.6814 | 0.8063 | 0.7386 | 191 | 0.7611 | 0.7547 | 0.7579 | 0.9661 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Declan/HuffPost_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-05-03T07:54:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240869504197766
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
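The accuracy and F1 columns below can be computed with a `compute_metrics` function along these lines; this is a sketch assuming weighted F1 averaging, which matches the reported value:
```python
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    return {"accuracy": accuracy_score(labels, preds),
            "f1": f1_score(labels, preds, average="weighted")}
```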
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3293 | 0.901 | 0.8979 |
| No log | 2.0 | 500 | 0.2236 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Declan/WallStreetJournal_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-05-03T12:07:42Z | Achieves 2.5% WER on LibriSpeech dev.clean: https://wandb.ai/sanchit-gandhi/flax-wav2vec2-2-bart-large-960h/runs/2lhazd5v |
Declan/test_push | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-05-03T12:31:27Z | ---
language: en
thumbnail: http://www.huggingtweets.com/joejoinerr/1655553718810/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477268531561517057/MhgifvbO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joe 🍞</div>
<div style="text-align: center; font-size: 14px;">@joejoinerr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joe 🍞.
| Data | Joe 🍞 |
| --- | --- |
| Tweets downloaded | 3176 |
| Retweets | 611 |
| Short tweets | 281 |
| Tweets kept | 2284 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3f3589ez/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @joejoinerr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35u823qi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35u823qi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/joejoinerr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DeltaHub/adapter_t5-3b_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/climate-evidence-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# ClimateErnieV2
ClimateErnieV2 is a classifier model that predicts whether a piece of evidence is related to a query claim. The model achieved an F1 score of 97.97% on the test dataset "mwong/climate-evidence-related". Starting from the pretrained ernie-v2-base model, the classifier head was trained on the Climate Fever dataset.
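A minimal sketch of querying such a claim-evidence relevance classifier, encoding the claim and the evidence as a sentence pair (the widget above joins them with `</s></s>`). The model id is a placeholder, since the card does not state the published hub id, and the label order is an assumption:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mwong/climate-ernie-v2"  # hypothetical id, replace with the actual hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
claim = "Earth's changing climate is a critical issue."
evidence = "Because of fears of climate change, legislation has been considered."
inputs = tokenizer(claim, evidence, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probability that the evidence is related to the claim (label order assumed)
```
|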
DeltaHub/lora_t5-base_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-05-03T13:26:14Z | ---
language:
- ru
license: apache-2.0
---
# Model MedRuRobertaLarge
# Model Description
This model is a fine-tuned version of [ruRoberta-large](https://huggingface.co/sberbank-ai/ruRoberta-large).
The code for the fine-tuning process can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/spellchecker/ml_ranging/models/med_ru_roberta_large/fine_tune_ru_roberta_large.py).
The model is fine-tuned on a specially collected dataset of over 30,000 medical anamneses in Russian.
The collected dataset can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/data/anamnesis/processed/all_anamnesis.csv).
This model was created as part of a master's project to develop a method for correcting typos
in medical histories, using BERT models to rank candidate corrections.
The project is open source and can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker).
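To make the candidate-ranking idea concrete, here is a minimal sketch (not the project's actual algorithm; see the repository for that). It masks a word and scores single-token candidate corrections by their masked-LM probability; `rank_candidates` is an illustrative helper introduced here:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("DmitryPogrebnoy/MedRuRobertaLarge")
model = AutoModelForMaskedLM.from_pretrained("DmitryPogrebnoy/MedRuRobertaLarge")

def rank_candidates(masked_sentence, candidates):
    # probability distribution over the vocabulary at the <mask> position
    inputs = tokenizer(masked_sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        probs = model(**inputs).logits[0, mask_pos].softmax(dim=-1)
    scores = {}
    for cand in candidates:
        # leading space so the candidate tokenizes as it would inside a sentence
        ids = tokenizer(" " + cand, add_special_tokens=False).input_ids
        if len(ids) == 1:  # multi-token candidates would need chunked scoring, omitted here
            scores[cand] = probs[ids[0]].item()
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_candidates("У пациента <mask> боль в грудине.", ["сильная", "острая", "сильный"]))
```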
# How to Get Started With the Model
You can use the model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> fill_mask = pipeline('fill-mask', model='DmitryPogrebnoy/MedRuRobertaLarge')
>>> fill_mask("У пациента <mask> боль в грудине.")
[{'score': 0.2467374950647354,
'token': 9233,
'token_str': ' сильный',
'sequence': 'У пациента сильный боль в грудине.'},
{'score': 0.16476310789585114,
'token': 27876,
'token_str': ' постоянный',
'sequence': 'У пациента постоянный боль в грудине.'},
{'score': 0.07211139053106308,
'token': 19551,
'token_str': ' острый',
'sequence': 'У пациента острый боль в грудине.'},
{'score': 0.0616639070212841,
'token': 18840,
'token_str': ' сильная',
'sequence': 'У пациента сильная боль в грудине.'},
{'score': 0.029712719842791557,
'token': 40176,
'token_str': ' острая',
'sequence': 'У пациента острая боль в грудине.'}]
```
Or you can load the model and tokenizer directly:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("DmitryPogrebnoy/MedRuRobertaLarge")
>>> model = AutoModelForMaskedLM.from_pretrained("DmitryPogrebnoy/MedRuRobertaLarge")
``` |
DemangeJeremy/4-sentiments-with-flaubert | [
"pytorch",
"flaubert",
"text-classification",
"fr",
"transformers",
"sentiments",
"french",
"flaubert-large"
] | text-classification | {
"architectures": [
"FlaubertForSequenceClassification"
],
"model_type": "flaubert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 226 | 2022-05-03T13:36:31Z | ---
tags:
- conversational
---
# Harry Potter DialoGPT-small Model |
Deniskin/essays_small_2000 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-05-03T14:03:13Z | ---
language:
- vi
tags:
- sentiment
- classification
license: mit
widget:
- text: "Không thể nào đẹp hơn"
- text: "Quá phí tiền, mà không đẹp"
- text: "Cái này giá ổn không nhỉ?"
---
[**GitHub Homepage**](https://github.com/wonrax/phobert-base-vietnamese-sentiment)
A model for Vietnamese sentiment analysis, fine-tuned from [vinai/phobert-base](https://huggingface.co/vinai/phobert-base).
Labels:
- NEG: Negative
- POS: Positive
- NEU: Neutral
Dataset: [30K e-commerce reviews](https://www.kaggle.com/datasets/linhlpv/vietnamese-sentiment-analyst)
## Usage
```python
import torch
from transformers import RobertaForSequenceClassification, AutoTokenizer
model = RobertaForSequenceClassification.from_pretrained("wonrax/phobert-base-vietnamese-sentiment")
tokenizer = AutoTokenizer.from_pretrained("wonrax/phobert-base-vietnamese-sentiment", use_fast=False)
# Just like PhoBERT: INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
sentence = 'Đây là mô_hình rất hay , phù_hợp với điều_kiện và như cầu của nhiều người .'
input_ids = torch.tensor([tokenizer.encode(sentence)])
with torch.no_grad():
out = model(input_ids)
print(out.logits.softmax(dim=-1).tolist())
# Output:
# [[0.002, 0.988, 0.01]]
# ^ ^ ^
# NEG POS NEU
```
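Because the input must already be word-segmented, raw Vietnamese text needs a segmentation step first. A minimal sketch using the third-party `underthesea` toolkit (one option among several; the package is assumed to be installed):
```python
# pip install underthesea
from underthesea import word_tokenize

raw = "Đây là mô hình rất hay, phù hợp với điều kiện và nhu cầu của nhiều người."
segmented = word_tokenize(raw, format="text")  # joins multi-syllable words with underscores
print(segmented)  # ready to be passed to the tokenizer above
```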
|
DeskDown/MarianMix_en-zh_to_vi-ms-hi-ja | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: data2vec-text-base-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5214716883534575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-cola
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5254
- Matthews Correlation: 0.5215
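For inference, the checkpoint can be loaded with the text-classification pipeline. A minimal sketch; the repository id below is a placeholder, not a confirmed published model:
```python
from transformers import pipeline

# Substitute the actual hub id or local path of this checkpoint.
classifier = pipeline("text-classification", model="<user>/data2vec-text-base-finetuned-cola")
print(classifier("The book was written by the author."))  # CoLA: grammatical acceptability
```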
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.160701759709141e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5632 | 1.0 | 535 | 0.5252 | 0.3869 |
| 0.4572 | 2.0 | 1070 | 0.5534 | 0.4758 |
| 0.3905 | 3.0 | 1605 | 0.4962 | 0.5259 |
| 0.3592 | 4.0 | 2140 | 0.5254 | 0.5215 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Devmapall/paraphrase-quora | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 3 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2605
- Rouge1: 49.3582
- Rouge2: 29.7017
- Rougel: 30.6996
- Rougelsum: 46.3736
- Gen Len: 142.0
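A minimal inference sketch with the summarization pipeline; the repository id is a placeholder and the generation settings are illustrative:
```python
from transformers import pipeline

# Substitute the actual hub id or local path of this checkpoint.
summarizer = pipeline("summarization", model="<user>/bart-large-cnn-finetuned-roundup-2")
document = "..."  # a long roundup-style document
print(summarizer(document, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```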
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3168 | 49.5253 | 30.0497 | 31.3982 | 46.9568 | 142.0 |
| No log | 2.0 | 264 | 1.2605 | 49.3582 | 29.7017 | 30.6996 | 46.3736 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DiegoBalam12/institute_classification | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
This model can be used to generate a caption from an input SMILES string.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-small-smiles2caption", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-small-smiles2caption')
input_text = 'C1=CC2=C(C(=C1)[O-])NC(=CC2=O)C(=O)O'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
DimaOrekhov/cubert-method-name | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Wer: 0.1301
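For transcription, the checkpoint follows the standard Wav2Vec2 CTC interface. A minimal sketch; the repository id and the audio-loading helper are placeholders, and the audio must be 16 kHz mono:
```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "<user>/model"  # substitute the actual checkpoint id or path
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = load_16khz_audio("sample.wav")  # hypothetical helper returning a 1-D float array
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```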
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.9743 | 0.18 | 400 | 2.1457 | 1.0000 |
| 0.5747 | 0.36 | 800 | 0.3415 | 0.3456 |
| 0.3383 | 0.54 | 1200 | 0.2797 | 0.3095 |
| 0.2967 | 0.72 | 1600 | 0.2464 | 0.2568 |
| 0.2747 | 0.9 | 2000 | 0.2341 | 0.2466 |
| 0.2501 | 1.08 | 2400 | 0.2299 | 0.2317 |
| 0.2309 | 1.26 | 2800 | 0.2306 | 0.2328 |
| 0.2273 | 1.44 | 3200 | 0.2212 | 0.2375 |
| 0.225 | 1.62 | 3600 | 0.2193 | 0.2267 |
| 0.2204 | 1.8 | 4000 | 0.2157 | 0.2295 |
| 0.2256 | 1.98 | 4400 | 0.2165 | 0.2260 |
| 0.1941 | 2.17 | 4800 | 0.2105 | 0.2163 |
| 0.1925 | 2.35 | 5200 | 0.2098 | 0.2153 |
| 0.1925 | 2.53 | 5600 | 0.2120 | 0.2148 |
| 0.1952 | 2.71 | 6000 | 0.2063 | 0.2178 |
| 0.1971 | 2.89 | 6400 | 0.2100 | 0.2158 |
| 0.1888 | 3.07 | 6800 | 0.2131 | 0.2172 |
| 0.1702 | 3.25 | 7200 | 0.2155 | 0.2203 |
| 0.173 | 3.43 | 7600 | 0.2141 | 0.2254 |
| 0.174 | 3.61 | 8000 | 0.2017 | 0.2100 |
| 0.1802 | 3.79 | 8400 | 0.1998 | 0.2043 |
| 0.1717 | 3.97 | 8800 | 0.2070 | 0.2110 |
| 0.162 | 4.15 | 9200 | 0.2082 | 0.2157 |
| 0.154 | 4.33 | 9600 | 0.2163 | 0.2161 |
| 0.1598 | 4.51 | 10000 | 0.2070 | 0.2171 |
| 0.1576 | 4.69 | 10400 | 0.2034 | 0.2116 |
| 0.1601 | 4.87 | 10800 | 0.1990 | 0.2009 |
| 0.152 | 5.05 | 11200 | 0.1994 | 0.2039 |
| 0.1395 | 5.23 | 11600 | 0.2013 | 0.2046 |
| 0.1407 | 5.41 | 12000 | 0.2009 | 0.2022 |
| 0.1449 | 5.59 | 12400 | 0.1982 | 0.1961 |
| 0.1483 | 5.77 | 12800 | 0.2082 | 0.2054 |
| 0.1514 | 5.95 | 13200 | 0.1953 | 0.1985 |
| 0.138 | 6.13 | 13600 | 0.2046 | 0.1965 |
| 0.1322 | 6.31 | 14000 | 0.2076 | 0.1948 |
| 0.1372 | 6.5 | 14400 | 0.1968 | 0.1944 |
| 0.136 | 6.68 | 14800 | 0.1971 | 0.1963 |
| 0.1382 | 6.86 | 15200 | 0.2001 | 0.1990 |
| 0.1335 | 7.04 | 15600 | 0.2026 | 0.1935 |
| 0.1206 | 7.22 | 16000 | 0.1986 | 0.1938 |
| 0.1239 | 7.4 | 16400 | 0.2054 | 0.1919 |
| 0.1254 | 7.58 | 16800 | 0.1918 | 0.1939 |
| 0.1262 | 7.76 | 17200 | 0.1960 | 0.1947 |
| 0.126 | 7.94 | 17600 | 0.1932 | 0.1906 |
| 0.1169 | 8.12 | 18000 | 0.2037 | 0.1916 |
| 0.1142 | 8.3 | 18400 | 0.1999 | 0.1900 |
| 0.1151 | 8.48 | 18800 | 0.1920 | 0.1855 |
| 0.1121 | 8.66 | 19200 | 0.2007 | 0.1859 |
| 0.1135 | 8.84 | 19600 | 0.1932 | 0.1879 |
| 0.1158 | 9.02 | 20000 | 0.1916 | 0.1859 |
| 0.105 | 9.2 | 20400 | 0.1961 | 0.1831 |
| 0.1023 | 9.38 | 20800 | 0.1914 | 0.1791 |
| 0.1004 | 9.56 | 21200 | 0.1881 | 0.1787 |
| 0.1023 | 9.74 | 21600 | 0.1963 | 0.1817 |
| 0.1075 | 9.92 | 22000 | 0.1889 | 0.1861 |
| 0.103 | 10.1 | 22400 | 0.1975 | 0.1791 |
| 0.0952 | 10.28 | 22800 | 0.1979 | 0.1787 |
| 0.0957 | 10.46 | 23200 | 0.1922 | 0.1817 |
| 0.0966 | 10.65 | 23600 | 0.1953 | 0.1857 |
| 0.0997 | 10.83 | 24000 | 0.1902 | 0.1783 |
| 0.0981 | 11.01 | 24400 | 0.1959 | 0.1780 |
| 0.0868 | 11.19 | 24800 | 0.2056 | 0.1783 |
| 0.0905 | 11.37 | 25200 | 0.1958 | 0.1777 |
| 0.0892 | 11.55 | 25600 | 0.1935 | 0.1796 |
| 0.0891 | 11.73 | 26000 | 0.1968 | 0.1763 |
| 0.0888 | 11.91 | 26400 | 0.2043 | 0.1804 |
| 0.0842 | 12.09 | 26800 | 0.2043 | 0.1733 |
| 0.0828 | 12.27 | 27200 | 0.1964 | 0.1715 |
| 0.0827 | 12.45 | 27600 | 0.1991 | 0.1749 |
| 0.0844 | 12.63 | 28000 | 0.2014 | 0.1695 |
| 0.0837 | 12.81 | 28400 | 0.1973 | 0.1759 |
| 0.0872 | 12.99 | 28800 | 0.1975 | 0.1689 |
| 0.0778 | 13.17 | 29200 | 0.1979 | 0.1740 |
| 0.0759 | 13.35 | 29600 | 0.2093 | 0.1753 |
| 0.076 | 13.53 | 30000 | 0.1990 | 0.1731 |
| 0.0762 | 13.71 | 30400 | 0.2024 | 0.1690 |
| 0.0764 | 13.89 | 30800 | 0.2037 | 0.1709 |
| 0.0756 | 14.07 | 31200 | 0.2007 | 0.1716 |
| 0.0702 | 14.25 | 31600 | 0.2011 | 0.1680 |
| 0.0694 | 14.43 | 32000 | 0.2061 | 0.1683 |
| 0.0713 | 14.61 | 32400 | 0.2014 | 0.1687 |
| 0.0693 | 14.79 | 32800 | 0.1961 | 0.1658 |
| 0.071 | 14.98 | 33200 | 0.1921 | 0.1645 |
| 0.0659 | 15.16 | 33600 | 0.2079 | 0.1682 |
| 0.0659 | 15.34 | 34000 | 0.2046 | 0.1649 |
| 0.0685 | 15.52 | 34400 | 0.1994 | 0.1660 |
| 0.0663 | 15.7 | 34800 | 0.1970 | 0.1652 |
| 0.0678 | 15.88 | 35200 | 0.1961 | 0.1634 |
| 0.0644 | 16.06 | 35600 | 0.2141 | 0.1644 |
| 0.0596 | 16.24 | 36000 | 0.2098 | 0.1628 |
| 0.0629 | 16.42 | 36400 | 0.1969 | 0.1616 |
| 0.0598 | 16.6 | 36800 | 0.2026 | 0.1604 |
| 0.0628 | 16.78 | 37200 | 0.2050 | 0.1620 |
| 0.0616 | 16.96 | 37600 | 0.1958 | 0.1618 |
| 0.0538 | 17.14 | 38000 | 0.2093 | 0.1588 |
| 0.0573 | 17.32 | 38400 | 0.1995 | 0.1588 |
| 0.0555 | 17.5 | 38800 | 0.2077 | 0.1608 |
| 0.0555 | 17.68 | 39200 | 0.2036 | 0.1571 |
| 0.0578 | 17.86 | 39600 | 0.2045 | 0.1572 |
| 0.056 | 18.04 | 40000 | 0.2065 | 0.1593 |
| 0.0525 | 18.22 | 40400 | 0.2093 | 0.1580 |
| 0.0527 | 18.4 | 40800 | 0.2141 | 0.1585 |
| 0.0529 | 18.58 | 41200 | 0.2137 | 0.1585 |
| 0.0533 | 18.76 | 41600 | 0.2021 | 0.1558 |
| 0.0529 | 18.94 | 42000 | 0.2108 | 0.1535 |
| 0.05 | 19.12 | 42400 | 0.2114 | 0.1555 |
| 0.0479 | 19.31 | 42800 | 0.2091 | 0.1549 |
| 0.0509 | 19.49 | 43200 | 0.2145 | 0.1554 |
| 0.0486 | 19.67 | 43600 | 0.2061 | 0.1536 |
| 0.049 | 19.85 | 44000 | 0.2132 | 0.1548 |
| 0.0484 | 20.03 | 44400 | 0.2077 | 0.1523 |
| 0.0449 | 20.21 | 44800 | 0.2177 | 0.1529 |
| 0.0452 | 20.39 | 45200 | 0.2204 | 0.1517 |
| 0.0477 | 20.57 | 45600 | 0.2132 | 0.1517 |
| 0.048 | 20.75 | 46000 | 0.2119 | 0.1532 |
| 0.0469 | 20.93 | 46400 | 0.2109 | 0.1524 |
| 0.0439 | 21.11 | 46800 | 0.2118 | 0.1503 |
| 0.044 | 21.29 | 47200 | 0.2033 | 0.1474 |
| 0.0435 | 21.47 | 47600 | 0.2066 | 0.1485 |
| 0.0418 | 21.65 | 48000 | 0.2125 | 0.1491 |
| 0.0417 | 21.83 | 48400 | 0.2139 | 0.1487 |
| 0.0446 | 22.01 | 48800 | 0.2054 | 0.1493 |
| 0.039 | 22.19 | 49200 | 0.2179 | 0.1459 |
| 0.0414 | 22.37 | 49600 | 0.2118 | 0.1466 |
| 0.0394 | 22.55 | 50000 | 0.2104 | 0.1444 |
| 0.0381 | 22.73 | 50400 | 0.2095 | 0.1458 |
| 0.0382 | 22.91 | 50800 | 0.2193 | 0.1471 |
| 0.0391 | 23.09 | 51200 | 0.2143 | 0.1455 |
| 0.0365 | 23.27 | 51600 | 0.2198 | 0.1445 |
| 0.0368 | 23.46 | 52000 | 0.2151 | 0.1444 |
| 0.038 | 23.64 | 52400 | 0.2094 | 0.1439 |
| 0.038 | 23.82 | 52800 | 0.2137 | 0.1422 |
| 0.0374 | 24.0 | 53200 | 0.2180 | 0.1425 |
| 0.0352 | 24.18 | 53600 | 0.2207 | 0.1422 |
| 0.0343 | 24.36 | 54000 | 0.2269 | 0.1445 |
| 0.0353 | 24.54 | 54400 | 0.2222 | 0.1438 |
| 0.0348 | 24.72 | 54800 | 0.2224 | 0.1413 |
| 0.0342 | 24.9 | 55200 | 0.2146 | 0.1401 |
| 0.0337 | 25.08 | 55600 | 0.2246 | 0.1408 |
| 0.0327 | 25.26 | 56000 | 0.2161 | 0.1401 |
| 0.0339 | 25.44 | 56400 | 0.2212 | 0.1402 |
| 0.0324 | 25.62 | 56800 | 0.2203 | 0.1394 |
| 0.0319 | 25.8 | 57200 | 0.2145 | 0.1376 |
| 0.0317 | 25.98 | 57600 | 0.2147 | 0.1375 |
| 0.0302 | 26.16 | 58000 | 0.2213 | 0.1362 |
| 0.0309 | 26.34 | 58400 | 0.2218 | 0.1365 |
| 0.0308 | 26.52 | 58800 | 0.2167 | 0.1362 |
| 0.0294 | 26.7 | 59200 | 0.2169 | 0.1368 |
| 0.0297 | 26.88 | 59600 | 0.2163 | 0.1350 |
| 0.0289 | 27.06 | 60000 | 0.2188 | 0.1348 |
| 0.0284 | 27.24 | 60400 | 0.2172 | 0.1338 |
| 0.0278 | 27.42 | 60800 | 0.2230 | 0.1342 |
| 0.0283 | 27.6 | 61200 | 0.2233 | 0.1342 |
| 0.0292 | 27.79 | 61600 | 0.2238 | 0.1335 |
| 0.0286 | 27.97 | 62000 | 0.2218 | 0.1327 |
| 0.0262 | 28.15 | 62400 | 0.2220 | 0.1324 |
| 0.0274 | 28.33 | 62800 | 0.2182 | 0.1323 |
| 0.0279 | 28.51 | 63200 | 0.2170 | 0.1314 |
| 0.0269 | 28.69 | 63600 | 0.2228 | 0.1313 |
| 0.0264 | 28.87 | 64000 | 0.2209 | 0.1313 |
| 0.0254 | 29.05 | 64400 | 0.2224 | 0.1304 |
| 0.026 | 29.23 | 64800 | 0.2220 | 0.1302 |
| 0.0253 | 29.41 | 65200 | 0.2229 | 0.1304 |
| 0.0244 | 29.59 | 65600 | 0.2217 | 0.1298 |
| 0.025 | 29.77 | 66000 | 0.2223 | 0.1303 |
| 0.0255 | 29.95 | 66400 | 0.2220 | 0.1301 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
DimaOrekhov/transformer-method-name | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8654425558524246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1334
- F1: 0.8654
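Since PAN-X.de is a German named-entity task, the checkpoint can be used through the token-classification pipeline. A minimal sketch with a placeholder repository id:
```python
from transformers import pipeline

# Substitute the actual hub id or local path of this checkpoint.
ner = pipeline("token-classification",
               model="<user>/xlm-roberta-base-finetuned-panx-de",
               aggregation_strategy="simple")  # merge word pieces into entity spans
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```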
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2541 | 1.0 | 525 | 0.1596 | 0.8242 |
| 0.1284 | 2.0 | 1050 | 0.1360 | 0.8499 |
| 0.0827 | 3.0 | 1575 | 0.1334 | 0.8654 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DingleyMaillotUrgell/homer-bot | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: apache-2.0
---
This model can be used to generate a caption from an input SMILES string.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-large-smiles2caption", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large-smiles2caption')
input_text = 'C1=CC2=C(C(=C1)[O-])NC(=CC2=O)C(=O)O'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji* |
DivyanshuSheth/T5-Seq2Seq-Final | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: data2vec-text-base-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8627450980392157
- name: F1
type: f1
value: 0.8992805755395683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-mrpc
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4087
- Accuracy: 0.8627
- F1: 0.8993
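MRPC is a sentence-pair task, so inference takes two sentences at once. A minimal sketch with a placeholder repository id:
```python
from transformers import pipeline

# Substitute the actual hub id or local path of this checkpoint.
paraphrase = pipeline("text-classification", model="<user>/data2vec-text-base-finetuned-mrpc")
result = paraphrase({"text": "The company posted strong earnings.",
                     "text_pair": "Earnings at the company were strong."})
print(result)  # MRPC labels: paraphrase vs. not paraphrase
```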
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.486061628311107e-06
- train_batch_size: 4
- eval_batch_size: 16
- seed: 19
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6197 | 1.0 | 917 | 0.4720 | 0.8039 | 0.8606 |
| 0.4763 | 2.0 | 1834 | 0.4087 | 0.8627 | 0.8993 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Dizoid/Lll | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
This model can be used to generate a SMILES string from an input caption.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-small-caption2smiles", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-small-caption2smiles')
input_text = 'The molecule is a monomethoxybenzene that is 2-methoxyphenol substituted by a hydroxymethyl group at position 4. It has a role as a plant metabolite. It is a member of guaiacols and a member of benzyl alcohols.'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# The model will generate "COC1=C(C=CC(=C1)CCCO)O". The ground-truth is "COC1=C(C=CC(=C1)CO)O".
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
Dmitriiserg/Pxd | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- bleu
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the xtreme_s dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7768
- Bleu: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.5511 | 0.31 | 500 | 5.1039 | 0.0 |
| 2.2033 | 0.62 | 1000 | 4.1782 | 0.0000 |
| 1.4703 | 0.93 | 1500 | 2.8979 | 0.0000 |
| 1.6507 | 1.23 | 2000 | 2.2250 | 0.0000 |
| 1.6791 | 1.54 | 2500 | 2.0530 | 0.0000 |
| 1.4587 | 1.85 | 3000 | 1.9121 | 0.0000 |
| 1.288 | 2.16 | 3500 | 1.8705 | 0.0000 |
| 1.2244 | 2.47 | 4000 | 1.7940 | 0.0000 |
| 1.0364 | 2.78 | 4500 | 1.7768 | 0.0000 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 2.1.1.dev0
- Tokenizers 0.11.0
|
Dmitry12/sber | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
This model can be used to generate a caption from an input SMILES string.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-base-smiles2caption", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-base-smiles2caption')
input_text = 'C1=CC2=C(C(=C1)[O-])NC(=CC2=O)C(=O)O'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
DongHai/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-8
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4519
- Rouge1: 49.5671
- Rouge2: 27.0118
- Rougel: 30.8538
- Rougelsum: 45.5503
- Gen Len: 141.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3159 | 48.5275 | 28.0817 | 30.6646 | 45.5024 | 142.0 |
| No log | 2.0 | 264 | 1.2377 | 47.0791 | 27.4386 | 28.9458 | 44.1536 | 142.0 |
| No log | 3.0 | 396 | 1.2474 | 49.3567 | 29.5904 | 30.8029 | 46.6083 | 142.0 |
| 0.9623 | 4.0 | 528 | 1.2914 | 47.8795 | 27.0611 | 29.8538 | 44.4494 | 142.0 |
| 0.9623 | 5.0 | 660 | 1.2982 | 49.9921 | 28.4839 | 31.5688 | 46.9734 | 142.0 |
| 0.9623 | 6.0 | 792 | 1.3521 | 46.7269 | 25.8672 | 29.7325 | 43.8279 | 142.0 |
| 0.9623 | 7.0 | 924 | 1.4102 | 47.4995 | 26.0066 | 29.4342 | 44.1102 | 141.8 |
| 0.3734 | 8.0 | 1056 | 1.4519 | 49.5671 | 27.0118 | 30.8538 | 45.5503 | 141.75 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DongHyoungLee/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
license: apache-2.0
---
## Example Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-large", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large')
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
Dongmin/testmodel | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 11 | 2022-05-03T17:40:19Z | ---
license: apache-2.0
---
## Example Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-base", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-base')
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
Waynehillsdev/Waynehills_summary_tensorflow | [
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
---
## Example Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-small", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-small')
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
Waynehillsdev/wav2vec2-base-timit-demo-colab | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad-pytorch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad-pytorch
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
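A minimal extractive question-answering sketch; the repository id is a placeholder:
```python
from transformers import pipeline

# Substitute the actual hub id or local path of this checkpoint.
qa = pipeline("question-answering", model="<user>/bert-finetuned-squad-pytorch")
result = qa(question="Where is the Eiffel Tower located?",
            context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.")
print(result["answer"], result["score"])
```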
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Waynehillsdev/waynehills_sentimental_kor | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
language: en
tags:
- summarization
license: bsd-3-clause
datasets:
- xsum
---
Citation:
```
@article{DBLP:journals/corr/abs-2110-07166,
author = {Prafulla Kumar Choubey and
Jesse Vig and
Wenhao Liu and
Nazneen Fatema Rajani},
title = {MoFE: Mixture of Factual Experts for Controlling Hallucinations in
Abstractive Summarization},
journal = {CoRR},
volume = {abs/2110.07166},
year = {2021},
url = {https://arxiv.org/abs/2110.07166},
eprinttype = {arXiv},
eprint = {2110.07166},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-07166.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Doohae/p_encoder | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-05-03T18:14:34Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-16
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8957
- Rouge1: 49.4097
- Rouge2: 29.3516
- Rougel: 31.527
- Rougelsum: 46.4241
- Gen Len: 141.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3170 | 48.412 | 29.2017 | 31.6679 | 45.494 | 141.85 |
| No log | 2.0 | 264 | 1.2292 | 49.0133 | 29.6645 | 30.7612 | 46.1673 | 142.0 |
| No log | 3.0 | 396 | 1.2670 | 49.183 | 29.4104 | 31.573 | 46.7082 | 142.0 |
| 0.9596 | 4.0 | 528 | 1.3059 | 47.3854 | 26.6865 | 28.4666 | 44.4934 | 141.8 |
| 0.9596 | 5.0 | 660 | 1.3288 | 48.1189 | 26.9242 | 31.2938 | 45.3462 | 142.0 |
| 0.9596 | 6.0 | 792 | 1.4084 | 47.5713 | 26.7488 | 29.2959 | 45.1764 | 141.3 |
| 0.9596 | 7.0 | 924 | 1.5043 | 46.5407 | 26.0995 | 29.9007 | 43.9335 | 142.0 |
| 0.3369 | 8.0 | 1056 | 1.5115 | 49.6891 | 29.0514 | 32.33 | 46.9357 | 142.0 |
| 0.3369 | 9.0 | 1188 | 1.6131 | 47.5773 | 27.6348 | 30.5294 | 45.1151 | 142.0 |
| 0.3369 | 10.0 | 1320 | 1.6837 | 46.5699 | 26.3805 | 29.8581 | 43.5252 | 142.0 |
| 0.3369 | 11.0 | 1452 | 1.7874 | 47.1383 | 26.535 | 30.1724 | 44.2508 | 142.0 |
| 0.148 | 12.0 | 1584 | 1.7776 | 49.8061 | 30.1994 | 33.2405 | 47.6102 | 142.0 |
| 0.148 | 13.0 | 1716 | 1.8144 | 48.4451 | 28.2949 | 30.9026 | 45.6614 | 142.0 |
| 0.148 | 14.0 | 1848 | 1.8646 | 50.1964 | 30.4426 | 32.8156 | 47.4134 | 142.0 |
| 0.148 | 15.0 | 1980 | 1.8829 | 48.8129 | 29.2358 | 32.3247 | 46.2233 | 142.0 |
| 0.0726 | 16.0 | 2112 | 1.8957 | 49.4097 | 29.3516 | 31.527 | 46.4241 | 141.9 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Doquey/DialoGPT-small-Luisbot1 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: data2vec-text-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9231651376146789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-sst2
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3600
- Accuracy: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.1519343408010398e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2865 | 1.0 | 4210 | 0.2662 | 0.9128 |
| 0.2256 | 2.0 | 8420 | 0.3698 | 0.9002 |
| 0.1676 | 3.0 | 12630 | 0.3107 | 0.9186 |
| 0.1481 | 4.0 | 16840 | 0.3425 | 0.9186 |
| 0.1429 | 5.0 | 21050 | 0.3600 | 0.9232 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DoyyingFace/bert-asian-hate-tweets-asian-clean-with-unclean-valid | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: cc-by-nc-4.0
---
Placeholder for North-T5x |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-4 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 44 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-32
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2324
- Rouge1: 46.462
- Rouge2: 25.9506
- Rougel: 29.4584
- Rougelsum: 44.1863
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3139 | 48.8247 | 29.2173 | 31.7628 | 45.8992 | 142.0 |
| No log | 2.0 | 264 | 1.2287 | 47.9398 | 29.4061 | 30.9133 | 44.9142 | 140.9 |
| No log | 3.0 | 396 | 1.2676 | 49.2743 | 30.4469 | 32.8893 | 46.6208 | 142.0 |
| 0.9578 | 4.0 | 528 | 1.3218 | 47.315 | 26.7303 | 30.5007 | 44.7654 | 142.0 |
| 0.9578 | 5.0 | 660 | 1.3173 | 47.1476 | 25.9408 | 29.4257 | 44.4956 | 142.0 |
| 0.9578 | 6.0 | 792 | 1.4283 | 47.5836 | 27.1572 | 29.8553 | 44.8858 | 142.0 |
| 0.9578 | 7.0 | 924 | 1.5005 | 46.6839 | 26.2214 | 30.1895 | 43.8753 | 140.75 |
| 0.3306 | 8.0 | 1056 | 1.5316 | 47.7611 | 27.1105 | 30.8142 | 44.7598 | 142.0 |
| 0.3306 | 9.0 | 1188 | 1.6295 | 48.4416 | 27.6912 | 30.3409 | 45.317 | 142.0 |
| 0.3306 | 10.0 | 1320 | 1.6564 | 46.5751 | 27.2306 | 29.7265 | 43.7327 | 142.0 |
| 0.3306 | 11.0 | 1452 | 1.7471 | 47.9684 | 27.5739 | 30.7018 | 44.6852 | 141.75 |
| 0.145 | 12.0 | 1584 | 1.7700 | 47.9274 | 28.5129 | 31.129 | 45.1009 | 142.0 |
| 0.145 | 13.0 | 1716 | 1.8391 | 49.8091 | 30.1597 | 33.6004 | 47.2007 | 141.95 |
| 0.145 | 14.0 | 1848 | 1.9212 | 45.2195 | 25.033 | 27.4181 | 42.6161 | 142.0 |
| 0.145 | 15.0 | 1980 | 1.9267 | 48.4959 | 28.1 | 31.2796 | 46.2758 | 142.0 |
| 0.0723 | 16.0 | 2112 | 1.9130 | 47.0765 | 27.4929 | 30.6862 | 44.1458 | 142.0 |
| 0.0723 | 17.0 | 2244 | 1.9514 | 48.5354 | 28.4909 | 31.8966 | 45.7116 | 142.0 |
| 0.0723 | 18.0 | 2376 | 2.0064 | 47.9339 | 28.6862 | 32.4472 | 45.3704 | 142.0 |
| 0.042 | 19.0 | 2508 | 2.0210 | 48.3169 | 28.1579 | 30.2681 | 45.3831 | 141.3 |
| 0.042 | 20.0 | 2640 | 2.0377 | 46.8156 | 26.0122 | 28.817 | 43.9383 | 142.0 |
| 0.042 | 21.0 | 2772 | 2.0587 | 46.3813 | 27.3555 | 29.875 | 43.6605 | 142.0 |
| 0.042 | 22.0 | 2904 | 2.0695 | 45.6728 | 26.0639 | 29.5653 | 42.3772 | 142.0 |
| 0.025 | 23.0 | 3036 | 2.1617 | 46.7283 | 26.2082 | 28.52 | 43.3304 | 142.0 |
| 0.025 | 24.0 | 3168 | 2.1375 | 48.1347 | 28.3444 | 31.7509 | 45.4907 | 142.0 |
| 0.025 | 25.0 | 3300 | 2.1911 | 47.3358 | 27.1479 | 29.4923 | 44.0087 | 142.0 |
| 0.025 | 26.0 | 3432 | 2.1806 | 47.2218 | 26.8421 | 30.03 | 44.2417 | 142.0 |
| 0.0153 | 27.0 | 3564 | 2.1890 | 46.3745 | 27.0095 | 29.7274 | 43.3372 | 142.0 |
| 0.0153 | 28.0 | 3696 | 2.2235 | 50.1274 | 30.8817 | 32.8766 | 46.7486 | 141.5 |
| 0.0153 | 29.0 | 3828 | 2.2236 | 50.1785 | 30.8079 | 32.8886 | 46.9888 | 142.0 |
| 0.0153 | 30.0 | 3960 | 2.2312 | 46.7468 | 26.4272 | 30.1175 | 43.9132 | 142.0 |
| 0.0096 | 31.0 | 4092 | 2.2287 | 47.558 | 26.3933 | 29.9122 | 44.5752 | 142.0 |
| 0.0096 | 32.0 | 4224 | 2.2324 | 46.462 | 25.9506 | 29.4584 | 44.1863 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-8 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-25000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9314
- name: F1
type: f1
value: 0.932017283069727
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-25000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3711
- Accuracy: 0.9314
- F1: 0.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | how to start prompt:
```
wordy:
```
example:
```
wordy: the ndp has turned into the country's darling of the young.
```
output:
```
the ndp is youth-driven.
```
OR
```
informal english:
```
example:
```
informal english: corn fields are all across illinois, visible once you leave chicago.
```
output:
```
corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
``` |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-50 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0122
- eval_runtime: 27.9861
- eval_samples_per_second: 35.732
- eval_steps_per_second: 0.572
- epoch: 2.13
- step: 334
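Since this is a domain-adapted fill-mask checkpoint, it can be queried directly with the standard pipeline; a minimal sketch (the hub namespace is omitted from this card, so the model id below may need a user prefix):
```python
# Minimal fill-mask sketch; prepend the hub namespace to the model id if needed.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="distilbert-base-uncased-finetuned-imdb")
print(unmasker("This movie was absolutely [MASK]."))
```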
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-75 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln41")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln41")
```
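Continuing from the loading snippet above, the model generates like any GPT-2-style causal LM; the sampling settings below are illustrative, not prescribed by the model:
```python
# Illustrative generation settings; the prompt follows the formats documented below.
prompt = (
    "informal english: corn fields are all across illinois, visible once you leave chicago.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=inputs["input_ids"].shape[1] + 60,  # leave room for the completion
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```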
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to a sentence or sentences.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
``` |
DoyyingFace/bert-asian-hate-tweets-concat-clean | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- chime6
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3`
This model was trained by simpleoier using the chime6 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b757b89d45d5574cebf44e225cbe32e3e9e4f522
pip install -e .
cd egs2/chime6/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3
```
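Alternatively, the packed model can be loaded directly in Python; a minimal sketch, assuming `espnet2` and `espnet_model_zoo` are installed and the input is 16 kHz mono audio:
```python
# Minimal inference sketch (assumes espnet2 + espnet_model_zoo are installed).
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3"
)
speech, rate = sf.read("utterance.wav")  # placeholder path; 16 kHz mono expected
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
print(text)
```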
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue May 3 16:47:10 EDT 2022`
- python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.10.1`
- Git hash: `b757b89d45d5574cebf44e225cbe32e3e9e4f522`
- Commit date: `Mon May 2 09:21:08 2022 -0400`
## asr_train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k_raw_en_bpe1000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|58881|66.5|21.3|12.2|8.8|42.3|77.4|
|decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|58881|68.6|20.7|10.6|8.4|39.8|77.5|
|decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|58881|67.5|20.3|12.2|8.0|40.5|76.5|
|decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|58881|67.7|21.4|10.9|8.6|40.9|77.9|
|decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|58881|66.6|20.9|12.5|8.2|41.6|77.8|
|decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|58881|69.4|20.2|10.4|8.6|39.1|75.8|
|decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|58881|65.7|20.2|14.1|7.5|41.8|77.8|
|decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|58881|65.7|19.0|15.3|6.2|40.6|78.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|280767|78.1|7.7|14.1|9.1|31.0|77.9|
|decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|280767|80.0|7.6|12.5|8.7|28.8|78.1|
|decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|280767|78.6|7.3|14.1|8.1|29.5|77.5|
|decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|280767|79.5|7.7|12.8|9.1|29.6|78.8|
|decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|280767|77.9|7.6|14.5|8.3|30.3|78.6|
|decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|280767|80.6|7.4|12.0|8.9|28.3|76.6|
|decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|280767|76.5|7.4|16.1|7.7|31.2|78.5|
|decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|280767|77.0|7.6|15.4|7.2|30.2|79.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|92680|65.8|18.8|15.4|8.7|42.9|78.0|
|decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|92680|67.9|18.1|13.9|8.2|40.3|78.2|
|decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|92680|66.9|17.8|15.2|8.0|41.1|77.7|
|decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|92680|67.2|18.5|14.3|8.2|40.9|78.9|
|decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|92680|66.1|18.2|15.7|7.8|41.7|78.6|
|decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|92680|68.9|17.7|13.4|8.2|39.3|76.6|
|decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|92680|66.1|19.1|14.8|10.2|44.1|78.6|
|decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|92680|66.0|19.9|14.1|9.5|43.6|79.8|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k_raw_en_bpe1000_sp
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 8
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 48
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe1000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe1000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe1000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe1000_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_worn_simu_u400k_cleaned_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_worn_simu_u400k_cleaned_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_gss_multiarray/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev_gss_multiarray/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 20000
token_list:
- <blank>
- <unk>
- '[inaudible]'
- '[laughs]'
- '[noise]'
- ▁
- s
- ''''
- ▁i
- ▁it
- t
- ▁you
- ▁the
- ▁yeah
- ▁a
- ▁like
- ▁that
- ▁and
- ▁to
- m
- ▁oh
- ▁so
- '-'
- e
- re
- a
- ▁just
- ▁no
- d
- ▁we
- n
- ▁in
- ing
- i
- ▁of
- ▁do
- ▁is
- ▁have
- ▁what
- ▁was
- ▁this
- ▁can
- o
- ▁one
- r
- ▁but
- er
- y
- ▁they
- ed
- ▁uh
- ▁for
- ▁okay
- ▁there
- ▁be
- ▁he
- ▁don
- g
- ll
- ▁right
- p
- ▁not
- u
- ▁on
- c
- ▁then
- ▁know
- ▁my
- ▁or
- ▁get
- ▁are
- ▁all
- ▁um
- ▁me
- ▁if
- ▁go
- ▁good
- ▁with
- ▁really
- b
- ▁gonna
- ▁think
- ▁cuz
- in
- ▁your
- k
- ve
- le
- w
- an
- ▁she
- l
- ▁well
- en
- f
- ▁up
- al
- ▁two
- h
- ar
- ▁how
- ▁mhm
- v
- ▁here
- ly
- ▁put
- ▁out
- ▁would
- ▁at
- ▁need
- ▁did
- ▁f
- ▁want
- ▁mm
- ▁more
- ch
- ri
- ▁now
- or
- ▁when
- ▁k
- ▁p
- ▁see
- ▁got
- ▁too
- ▁thing
- ▁time
- 'on'
- ▁actually
- ▁where
- ne
- ▁guys
- ▁some
- ▁had
- ▁why
- ic
- ▁them
- ▁st
- ro
- ▁make
- ur
- ▁three
- ▁b
- ▁mean
- ▁wanna
- ▁should
- at
- ▁from
- th
- ▁didn
- ▁about
- ▁yes
- ▁because
- ▁yep
- ▁people
- ▁co
- ▁could
- ▁were
- ▁take
- ▁has
- ▁something
- ce
- ▁w
- ▁c
- ▁sure
- ▁who
- ▁other
- ▁sh
- ▁say
- ▁an
- ▁her
- ▁g
- ▁work
- il
- es
- ▁little
- el
- ▁much
- ▁eat
- ▁still
- ▁wait
- ▁ma
- ▁four
- ▁de
- ▁only
- ▁down
- ▁though
- ▁way
- ▁lot
- ▁use
- ▁over
- ▁let
- ▁pretty
- ▁these
- ▁bo
- ▁any
- ▁off
- ▁ba
- ▁di
- ▁d
- ▁back
- ▁sorry
- ▁those
- ▁very
- ▁bit
- ▁even
- li
- ▁stuff
- ke
- ate
- z
- ▁probably
- ▁nice
- ▁turn
- ▁doesn
- ▁first
- ▁does
- ▁hmm
- ▁look
- ▁going
- ▁play
- ▁ho
- pe
- ▁maybe
- ▁come
- ▁fine
- ▁cut
- ▁man
- ▁bu
- ▁ca
- ▁mo
- ▁th
- lo
- ▁never
- ry
- ▁po
- ▁h
- ▁will
- us
- x
- ge
- ▁five
- ▁start
- ▁him
- ▁long
- ▁give
- ▁se
- ting
- ▁sp
- ▁ra
- ▁done
- ▁con
- ▁big
- ▁his
- ▁y
- ▁which
- ▁been
- ▁dunno
- est
- ion
- ▁fa
- ▁than
- me
- ▁our
- ▁also
- ▁six
- ▁kinda
- co
- ▁cool
- ty
- ▁game
- ▁thought
- ▁fi
- ▁after
- ▁day
- ▁doing
- ment
- ▁said
- ▁whatever
- ap
- ▁place
- ▁anything
- ▁j
- ▁guess
- em
- ▁always
- ▁things
- ▁card
- ▁li
- ▁thank
- ▁last
- ▁before
- ▁many
- ▁watch
- ▁pa
- ▁year
- ▁ah
- ▁hot
- ▁into
- ▁ten
- ▁keep
- ▁bad
- tion
- ▁us
- ▁cr
- ▁part
- ▁cook
- ▁o
- ▁cards
- ▁everything
- ▁la
- ▁ha
- ▁by
- ▁wow
- ▁their
- ies
- ▁hey
- ▁same
- ▁went
- ▁pick
- ▁might
- ▁sc
- ▁ex
- ie
- ▁wood
- ight
- ▁another
- ▁better
- ▁try
- ard
- ▁seven
- ▁guy
- ▁point
- up
- op
- ▁twenty
- ▁hand
- ▁wh
- ▁food
- ▁tra
- ation
- ▁buy
- ▁kind
- ist
- ▁whole
- ive
- is
- ▁half
- able
- ▁pro
- ▁win
- ▁different
- ▁cl
- age
- ▁already
- ▁gotta
- ack
- ▁ti
- ▁lo
- ▁every
- ▁super
- ▁again
- ▁new
- ▁remember
- ers
- ▁dude
- um
- ▁feel
- ▁roll
- ▁cheese
- ▁na
- ▁sit
- ▁sa
- way
- ▁hard
- ▁enough
- 'no'
- ▁eight
- ity
- ▁friend
- ▁un
- ul
- ▁love
- ▁salt
- ▁mi
- ▁steak
- ▁nine
- ▁else
- ▁looks
- ▁pu
- ▁fl
- ▁build
- ▁pre
- ▁end
- ▁ta
- ▁salad
- ▁high
- ▁find
- ▁water
- ▁usually
- ▁small
- ▁around
- ▁butter
- ▁car
- ▁made
- ▁wash
- ▁move
- ▁plate
- ▁true
- ▁pan
- ain
- cu
- ▁nope
- ▁ooh
- ▁sauce
- ▁help
- ▁wa
- ▁left
- ▁person
- uck
- ▁top
- ▁side
- ▁cha
- ▁god
- ▁leave
- ▁goes
- ▁weird
- ▁each
- ▁r
- ▁basically
- ▁chicken
- ted
- ▁oil
- ▁trying
- ▁fun
- ▁close
- ▁taste
- ▁old
- ▁show
- ble
- ▁next
- ▁name
- ▁used
- ▁mine
- ous
- ▁great
- ▁pot
- ally
- ▁burn
- ▁huh
- ▁minutes
- ▁once
- ▁phone
- ▁bowl
- tic
- ▁tell
- ound
- ▁ask
- ▁mu
- ▁thirty
- ▁someone
- ▁piece
- ▁saying
- ▁vi
- ish
- ▁ja
- ▁comp
- ▁called
- ▁through
- ▁gr
- ize
- ▁everyone
- ▁funny
- ▁getting
- ▁won
- ▁bl
- ▁away
- ▁pi
- ▁chi
- ▁totally
- ▁red
- ▁word
- ▁hundred
- ▁open
- ▁dollar
- ▁stone
- ▁yet
- ade
- ▁du
- ▁mmm
- ▁sound
- ▁both
- ▁mar
- ant
- ▁potatoes
- ▁garlic
- fi
- ▁hear
- ▁pass
- ▁saw
- ▁kill
- ▁second
- ▁girl
- ▁shit
- ▁throw
- ▁bought
- ▁please
- ▁che
- ▁da
- ▁hit
- ▁tea
- ▁hold
- ▁shoot
- ▁most
- ▁clean
- ▁wanted
- ▁pepper
- ▁happen
- ▁aw
- ▁home
- ▁drink
- ance
- ▁yo
- ▁sheep
- ▁while
- ▁ro
- ▁house
- ▁call
- ▁meat
- ▁face
- ▁fuck
- ▁talking
- ▁green
- ries
- side
- ▁set
- ▁exactly
- huh
- ▁hour
- ▁ready
- ▁played
- ▁finish
- ▁add
- ▁susie
- q
- ▁stop
- ▁almost
- ▁bring
- ▁rice
- ▁ear
- ▁sweet
- ▁hi
- ▁pizza
- ake
- ▁wi
- ▁gra
- ▁free
- ▁night
- ▁pay
- ▁rick
- ▁full
- ▁wheat
- ▁count
- ▁white
- ful
- ▁light
- ▁plan
- ▁supposed
- ▁either
- ▁bacon
- ▁sim
- ▁sense
- ▁blue
- ▁team
- ▁interesting
- ▁care
- ▁room
- nut
- ward
- ▁real
- ▁week
- ▁heard
- ▁told
- ▁mind
- ▁table
- ▁head
- ash
- ▁looking
- ▁ever
- ▁check
- ▁together
- ▁ju
- ▁app
- ▁grab
- ▁brown
- ▁eh
- book
- ▁stick
- ▁later
- ▁pea
- ▁talk
- ▁awesome
- ▁cream
- ling
- ▁fifty
- ▁color
- ▁qu
- ▁round
- ▁nothing
- ▁power
- ▁deal
- ▁matter
- ▁player
- ▁draw
- ▁having
- ▁kid
- ▁fish
- ▁damn
- ▁own
- ▁crazy
- ▁dad
- ▁took
- ▁perfect
- ▁idea
- ▁couple
- ▁live
- ▁job
- ▁smell
- ▁number
- ▁reason
- ▁best
- ▁forty
- ▁making
- ▁dinner
- ▁change
- ▁playing
- ▁sometimes
- ▁fridge
- ▁miss
- j
- ▁woah
- ▁chancey
- ▁bucks
- ▁brick
- ▁rec
- ▁run
- ▁far
- ball
- ▁bread
- ▁fast
- ▁knife
- ▁black
- ▁break
- ▁mix
- ▁today
- ▁cheap
- ▁mike
- ▁expensive
- out
- ▁normal
- ▁under
- ▁using
- ▁double
- ▁gold
- ▁life
- ▁oven
- ▁less
- ▁space
- ▁wine
- ence
- land
- ▁sea
- ▁corn
- ▁cooking
- ▁stay
- ▁line
- ▁may
- ▁bar
- ▁block
- ▁late
- ▁yourself
- ▁quite
- ▁apple
- ▁extra
- ▁wedding
- ▁happened
- ▁kitchen
- ▁coming
- ▁zero
- ▁definitely
- ▁connect
- ▁read
- ▁crab
- ▁easier
- ▁mkay
- ▁egg
- ▁came
- ▁money
- ▁anyone
- ▁save
- ▁problem
- ▁club
- ▁tried
- ▁wrong
- ▁spot
- ▁low
- ▁amazing
- ▁milk
- ▁jeff
- ▁flip
- ▁text
- ▁bottle
- jo
- ▁without
- ▁parents
- ▁anymore
- ▁course
- ship
- ▁month
- ▁chinese
- ▁must
- ▁movie
- ▁wonder
- ▁bunch
- ▁family
- ▁season
- ▁quick
- ▁past
- ▁paul
- ▁rid
- ▁tennis
- town
- ▁cold
- ▁serious
- ▁drive
- ▁boil
- ▁screw
- ▁least
- ▁everybody
- ▁sort
- ▁thomas
- ▁rest
- ▁suck
- ▁road
- ▁fair
- ▁forgot
- ▁order
- ▁middle
- ▁babe
- ▁bang
- ▁dress
- ▁sleep
- ▁question
- ▁until
- ▁sheriff
- ▁chop
- ▁restaurant
- ▁outside
- ▁learn
- ▁stand
- ▁walk
- ▁attack
- ▁trade
- ▁phil
- ▁few
- ▁strong
- ▁school
- ▁world
- ▁company
- ▁easy
- ▁hockey
- ▁somebody
- ▁short
- ▁figure
- ▁spice
- ▁apparently
- ▁since
- ▁serve
- ▁huge
- ▁saboteur
- ▁fifteen
- ▁myself
- ▁such
- ▁port
- ▁literally
- ▁lose
- ▁crap
- ught
- ▁gosh
- ▁unless
- ▁joke
- ▁store
- ▁bigger
- ▁spell
- ▁ago
- ▁hang
- ▁depend
- ▁ginger
- ▁slow
- ▁medium
- ▁record
- acti
- ▁kenny
- ▁picture
- old
- ▁thousand
- ▁cover
- ▁tree
- ▁obvious
- ▁glass
- ▁taking
- ▁letter
- ▁eleven
- ▁skin
- ▁market
- ▁anybody
- ▁ahead
- ▁morning
- ▁brand
- ▁paper
- ▁lemon
- ▁onions
- ▁juice
- ▁jimmy
- ▁living
- ▁front
- ▁bottom
- ▁dark
- ▁oops
- ▁arjan
- ▁shot
- ▁rule
- ▁hun
- ▁flavor
- ▁speak
- ▁gun
- ▁potato
- ▁worry
- ▁twelve
- ▁sandwich
- ▁plus
- ▁believe
- ▁knew
- ▁realize
- ▁sugar
- ▁happy
- ▁sister
- ▁entire
- ▁master
- ▁eye
- ▁touch
- ▁wenny
- ▁drop
- ▁price
- ▁slice
- ▁sword
- ▁spicy
- ▁listen
- ▁outlaw
- que
- ▁percent
- ▁yesterday
- ▁mushroom
- ▁worth
- ▁proper
- ▁story
- ▁megan
- ▁character
- ▁hair
- ▁straight
- ▁discard
- ▁spoon
- ▁understand
- ▁computer
- ▁type
- ▁nikki
- ▁tomorrow
- ▁trump
- ▁third
- ▁bennet
- ▁nobody
- ▁somewhere
- ▁amount
- ▁split
- ▁accent
- ▁group
- ▁trip
- ▁lunch
- ▁racket
- ▁level
- ▁difference
- ▁orange
- ▁gave
- ▁dessert
- ▁single
- ▁chocolate
- ▁junette
- ▁camera
- ▁regular
- ▁video
- ▁gross
- ▁notice
- ▁actual
- ▁between
- ▁surprise
- ▁smart
- ▁east
- ▁craft
- ▁rock
- ▁certain
- ▁rather
- ▁lobster
- ▁photo
- ▁favorite
- ▁behind
- ▁across
- ▁steal
- ▁spend
- ▁weekend
- ▁special
- ▁sign
- ▁wrap
- ▁except
- ▁john
- ▁conversation
- ▁asian
- ▁grand
- ▁online
- ▁explain
- ▁dishes
- ▁magic
- ▁decide
- ▁fancy
- ▁random
- ▁tunnel
- ▁switch
- ▁transcribe
- ▁english
- ▁giant
- ▁kick
- ▁claire
- ▁laugh
- ▁yellow
- ▁delicious
- ▁freeze
- ▁drunk
- ▁general
- ▁gimme
- ▁damage
- ▁breakfast
- ▁roast
- ▁josh
- ▁choose
- ▁email
- ▁direct
- ▁tomatoes
- ▁fruit
- ▁apart
- ▁chopstick
- ▁vancouver
- ▁kept
- tract
- ▁chunk
- ▁girlfriend
- ▁shuffle
- ▁terrible
- ▁diamond
- ▁sausage
- ▁sweat
- ▁iphone
- ▁pineapple
- ▁summer
- ▁french
- ▁fresh
- ▁heavy
- ▁million
- ▁instead
- ▁ridiculous
- ▁tough
- ▁friday
- ▁whenever
- ▁coffee
- ▁hilarious
- ▁worried
- ▁especially
- ▁shrimp
- ▁avocado
- '&'
- ä
- '#'
- ǎ
- î
- ü
- ǐ
- ñ
- â
- ç
- ']'
- é
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 100
num_freq_mask: 4
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 128
encoder: transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d2
normalize_before: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.0
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
albert-large-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26,792 | 2022-05-03T21:34:00Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-64
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4772
- Rouge1: 46.5444
- Rouge2: 27.4056
- Rougel: 29.6779
- Rougelsum: 44.0905
- Gen Len: 142.0
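For inference, the checkpoint can be driven through the standard summarization pipeline; a minimal sketch (the hub namespace is omitted here, and the generation lengths are illustrative):
```python
# Minimal summarization sketch; prepend the hub namespace to the model id if needed.
from transformers import pipeline

summarizer = pipeline("summarization", model="bart-large-cnn-finetuned-roundup-64")
article = "..."  # placeholder for the round-up text to be summarized
print(summarizer(article, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```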
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 64
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3213 | 48.3389 | 28.6641 | 31.4086 | 45.6679 | 142.0 |
| No log | 2.0 | 264 | 1.2325 | 48.798 | 29.3068 | 31.4329 | 45.7945 | 142.0 |
| No log | 3.0 | 396 | 1.2791 | 47.1449 | 27.3965 | 30.56 | 44.4704 | 142.0 |
| 0.9574 | 4.0 | 528 | 1.3134 | 46.2319 | 25.6249 | 28.7673 | 43.7555 | 140.3 |
| 0.9574 | 5.0 | 660 | 1.3187 | 46.7313 | 25.3467 | 29.3873 | 43.9495 | 142.0 |
| 0.9574 | 6.0 | 792 | 1.4271 | 48.1638 | 27.8874 | 30.5334 | 45.9944 | 142.0 |
| 0.9574 | 7.0 | 924 | 1.4876 | 46.7481 | 25.7259 | 29.7214 | 43.7042 | 140.5 |
| 0.3303 | 8.0 | 1056 | 1.5259 | 46.7075 | 26.0716 | 29.5521 | 43.7312 | 142.0 |
| 0.3303 | 9.0 | 1188 | 1.6223 | 48.012 | 27.2795 | 30.4989 | 45.4644 | 142.0 |
| 0.3303 | 10.0 | 1320 | 1.6842 | 48.0074 | 26.8831 | 29.3396 | 45.1937 | 142.0 |
| 0.3303 | 11.0 | 1452 | 1.7317 | 46.52 | 26.5152 | 29.5124 | 43.8797 | 142.0 |
| 0.1478 | 12.0 | 1584 | 1.8087 | 47.5887 | 27.0488 | 29.8569 | 44.7318 | 140.8 |
| 0.1478 | 13.0 | 1716 | 1.8263 | 46.1251 | 25.8576 | 30.1698 | 42.7228 | 142.0 |
| 0.1478 | 14.0 | 1848 | 1.9459 | 46.4034 | 25.7039 | 28.2542 | 43.7254 | 142.0 |
| 0.1478 | 15.0 | 1980 | 1.9539 | 44.4666 | 24.5827 | 27.7147 | 41.9769 | 142.0 |
| 0.0779 | 16.0 | 2112 | 1.9654 | 47.2267 | 26.4562 | 29.7352 | 44.0823 | 142.0 |
| 0.0779 | 17.0 | 2244 | 1.9580 | 48.5086 | 28.0294 | 30.8311 | 45.6336 | 142.0 |
| 0.0779 | 18.0 | 2376 | 2.0065 | 48.293 | 28.5678 | 30.0243 | 45.1384 | 142.0 |
| 0.0499 | 19.0 | 2508 | 1.9313 | 49.0549 | 28.9695 | 32.0711 | 46.3834 | 142.0 |
| 0.0499 | 20.0 | 2640 | 2.0176 | 47.0121 | 25.1606 | 29.0108 | 44.1556 | 142.0 |
| 0.0499 | 21.0 | 2772 | 2.0711 | 48.3754 | 28.2221 | 30.772 | 45.8547 | 140.95 |
| 0.0499 | 22.0 | 2904 | 2.0848 | 45.7392 | 25.254 | 29.0833 | 43.0381 | 142.0 |
| 0.0335 | 23.0 | 3036 | 2.0711 | 47.2931 | 27.4573 | 30.718 | 44.5932 | 142.0 |
| 0.0335 | 24.0 | 3168 | 2.1200 | 50.515 | 30.4253 | 33.7045 | 47.6158 | 142.0 |
| 0.0335 | 25.0 | 3300 | 2.1097 | 46.4737 | 26.3055 | 29.0148 | 43.2135 | 142.0 |
| 0.0335 | 26.0 | 3432 | 2.1695 | 46.9099 | 26.5227 | 29.7757 | 44.0613 | 142.0 |
| 0.0249 | 27.0 | 3564 | 2.1494 | 47.8319 | 27.6364 | 31.3593 | 45.065 | 141.95 |
| 0.0249 | 28.0 | 3696 | 2.1510 | 47.504 | 26.8971 | 31.7196 | 45.0328 | 142.0 |
| 0.0249 | 29.0 | 3828 | 2.1612 | 46.8789 | 27.266 | 30.1009 | 43.8248 | 142.0 |
| 0.0249 | 30.0 | 3960 | 2.1579 | 47.7012 | 27.7761 | 30.935 | 44.3686 | 142.0 |
| 0.018 | 31.0 | 4092 | 2.1981 | 48.4703 | 29.167 | 31.9815 | 45.8005 | 142.0 |
| 0.018 | 32.0 | 4224 | 2.2332 | 45.9512 | 25.8111 | 29.2467 | 42.9234 | 142.0 |
| 0.018 | 33.0 | 4356 | 2.1944 | 47.7189 | 28.1413 | 30.9692 | 44.9361 | 142.0 |
| 0.018 | 34.0 | 4488 | 2.2589 | 50.9687 | 32.3987 | 36.5644 | 48.3938 | 142.0 |
| 0.0132 | 35.0 | 4620 | 2.2269 | 47.8241 | 28.0442 | 31.5535 | 44.9394 | 142.0 |
| 0.0132 | 36.0 | 4752 | 2.2865 | 47.4383 | 27.0825 | 30.4109 | 44.194 | 142.0 |
| 0.0132 | 37.0 | 4884 | 2.3267 | 49.1786 | 29.6416 | 32.875 | 46.8821 | 142.0 |
| 0.0095 | 38.0 | 5016 | 2.2872 | 48.2085 | 28.3304 | 32.1473 | 45.3571 | 142.0 |
| 0.0095 | 39.0 | 5148 | 2.3340 | 46.6762 | 26.1637 | 29.0149 | 43.5923 | 142.0 |
| 0.0095 | 40.0 | 5280 | 2.3425 | 46.7561 | 26.1645 | 29.6337 | 43.6188 | 142.0 |
| 0.0095 | 41.0 | 5412 | 2.3111 | 49.4118 | 29.9761 | 33.4765 | 46.601 | 142.0 |
| 0.0076 | 42.0 | 5544 | 2.3892 | 45.3335 | 25.0161 | 28.4124 | 41.9873 | 142.0 |
| 0.0076 | 43.0 | 5676 | 2.3808 | 46.2506 | 26.4283 | 29.3841 | 42.7488 | 142.0 |
| 0.0076 | 44.0 | 5808 | 2.3825 | 45.6823 | 26.0048 | 29.5501 | 42.6475 | 142.0 |
| 0.0076 | 45.0 | 5940 | 2.3592 | 47.9127 | 26.7924 | 30.2353 | 44.791 | 142.0 |
| 0.0051 | 46.0 | 6072 | 2.4206 | 46.0415 | 27.0681 | 29.9602 | 43.1225 | 142.0 |
| 0.0051 | 47.0 | 6204 | 2.4214 | 48.1229 | 29.0913 | 31.1828 | 45.0022 | 142.0 |
| 0.0051 | 48.0 | 6336 | 2.4176 | 47.3825 | 27.7622 | 30.4138 | 43.9047 | 142.0 |
| 0.0051 | 49.0 | 6468 | 2.4137 | 48.2544 | 28.277 | 31.5548 | 45.6053 | 142.0 |
| 0.0041 | 50.0 | 6600 | 2.4384 | 49.6459 | 30.186 | 33.0059 | 47.0483 | 142.0 |
| 0.0041 | 51.0 | 6732 | 2.4433 | 47.7279 | 27.7857 | 30.2982 | 45.0842 | 142.0 |
| 0.0041 | 52.0 | 6864 | 2.4068 | 48.6047 | 28.1758 | 31.2744 | 45.8336 | 142.0 |
| 0.0041 | 53.0 | 6996 | 2.4362 | 48.7095 | 29.3335 | 31.9509 | 46.4161 | 142.0 |
| 0.003 | 54.0 | 7128 | 2.4307 | 48.836 | 29.6069 | 32.4004 | 46.1986 | 142.0 |
| 0.003 | 55.0 | 7260 | 2.4292 | 47.2945 | 26.7577 | 28.9719 | 43.8988 | 142.0 |
| 0.003 | 56.0 | 7392 | 2.4425 | 45.2261 | 25.6879 | 28.8129 | 42.6474 | 142.0 |
| 0.0024 | 57.0 | 7524 | 2.4386 | 47.967 | 28.5415 | 32.2049 | 45.5111 | 142.0 |
| 0.0024 | 58.0 | 7656 | 2.4528 | 47.5552 | 27.6397 | 30.9151 | 44.2627 | 142.0 |
| 0.0024 | 59.0 | 7788 | 2.4574 | 46.7821 | 27.3368 | 30.6334 | 44.0533 | 142.0 |
| 0.0024 | 60.0 | 7920 | 2.4659 | 47.3507 | 26.8371 | 30.4566 | 44.4452 | 142.0 |
| 0.0018 | 61.0 | 8052 | 2.4766 | 47.9847 | 28.2678 | 30.0664 | 45.0071 | 142.0 |
| 0.0018 | 62.0 | 8184 | 2.4682 | 46.8392 | 27.1275 | 30.144 | 43.6379 | 142.0 |
| 0.0018 | 63.0 | 8316 | 2.4754 | 45.6338 | 26.2812 | 29.4831 | 42.8744 | 142.0 |
| 0.0018 | 64.0 | 8448 | 2.4772 | 46.5444 | 27.4056 | 29.6779 | 44.0905 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
albert-xlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 341 | 2022-05-03T21:55:48Z | XLM-R further pretrained with MLM on GLUECoS, CMU DoG, and an EN-HI code-mixed corpus, then pretrained with NLI on the MNLI corpus and finetuned on GLUECoS. |
albert-xlarge-v2 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,973 | 2022-05-03T22:56:48Z | ## Swedish parliamentary motions party classifier
A model trained on Swedish parliamentary motions from 2018 to 2021. For a given text, it outputs the probability that each party is the originator.
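A minimal usage sketch (the model id is a hypothetical placeholder, since the full hub path is not shown here):
```python
# Placeholder model id: substitute the actual hub path of this classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="<hub-id-of-this-model>",  # hypothetical placeholder
    return_all_scores=True,          # one probability per party, not just the top label
)
print(classifier("Riksdagen bör utreda en reform av arbetslöshetsförsäkringen."))
``` |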
bert-base-cased-finetuned-mrpc | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11,644 | 2022-05-03T23:25:24Z | ## Sentiment classifier
Sentiment classifier for Swedish trained on ScandiSent dataset. |
bert-base-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,621,271 | 2022-05-03T23:25:25Z | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# albert-base-v2_pub_section
- original model file name: textclassifer_albert-base-v2_pubmed_full
- This is a fine-tuned checkpoint of `albert-base-v2` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## metadata
### training_parameters
- date_run: Apr-26-2022_t-04
- huggingface_tag: albert-base-v2
|
bert-base-chinese | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,377,486 | 2022-05-03T23:27:00Z | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
widget:
- text: "Many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "BACKGROUND example"
- text: "A total of 192 MI patients and 140 control persons were included."
example_title: "METHODS example"
- text: "MI patients had 18 % higher plasma levels of MAp44 (IQR 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "RESULTS example"
- text: "The finding that a brief CB group intervention delivered by real-world providers significantly reduced MDD onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "CONCLUSIONS example"
- text: "In order to understand and update the prevalence of myopia in Taiwan, a nationwide survey was performed in 1995."
example_title: "OBJECTIVE example"
---
# scibert-scivocab-cased_pub_section
- original model file name: textclassifer_scibert_scivocab_cased_pubmed_20k
- This is a fine-tuned checkpoint of `allenai/scibert_scivocab_cased` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## metadata
### training_metrics
- date_run: Apr-26-2022_t-13
- huggingface_tag: allenai/scibert_scivocab_cased
- test_set: [{'test_accuracy': 0.8313589096069336, 'test_matthewscorrcoef': 0.7736952900886536, 'test_f1score': 0.8317078948020935, 'test_cross_entropy': 0.5242752432823181}]
### training_parameters
- NUM_EPOCHS: 12
- BATCH_SIZE: 32
- MAX_INPUT_LENGTH: 256
- TRAIN_FP16: True
- TRAIN_STRATEGY: freeze
- LR_SCHEDULE: reducelronplateau
- LR_INITIAL: 0.001
- WEIGHT_DECAY: 0.05
- UNFREEZE_EPOCH: 4
- hf_tag: allenai/scibert_scivocab_cased
- lowercased_input: False
- input_text_colname: description
- target_cls_colname: target
- num_classes: 5
- model_shortname: scibert_scivocab_cased
|
bert-base-german-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 175,983 | 2022-05-03T23:35:50Z | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
widget:
- text: "Many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "BACKGROUND example"
- text: "A total of 192 MI patients and 140 control persons were included."
example_title: "METHODS example"
- text: "MI patients had 18 % higher plasma levels of MAp44 (IQR 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "RESULTS example"
- text: "The finding that a brief CB group intervention delivered by real-world providers significantly reduced MDD onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "CONCLUSIONS example"
- text: "In order to understand and update the prevalence of myopia in Taiwan, a nationwide survey was performed in 1995."
example_title: "OBJECTIVE example"
---
# biobert-v1.1_pub_section
- original model file name: textclassifer_biobert-v1.1_pubmed_20k
- This is a fine-tuned checkpoint of `dmis-lab/biobert-v1.1` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## metadata
### training_metrics
- val_accuracy: 0.8522772192955017
- val_matthewscorrcoef: 0.8009328246116638
- val_f1score: 0.8517481088638306
- val_cross_entropy: 0.4344026446342468
- epoch: 12.0
- train_accuracy_step: 0.8203125
- train_matthewscorrcoef_step: 0.7453048229217529
- train_f1score_step: 0.8245896100997925
- train_cross_entropy_step: 0.480397492647171
- train_accuracy_epoch: 0.8297363519668579
- train_matthewscorrcoef_epoch: 0.7703952193260193
- train_f1score_epoch: 0.8274592757225037
- train_cross_entropy_epoch: 0.5001224875450134
- test_accuracy: 0.8441678881645203
- test_matthewscorrcoef: 0.7905130982398987
- test_f1score: 0.8435087203979492
- test_cross_entropy: 0.4557005763053894
- date_run: Apr-22-2022_t-14
- huggingface_tag: dmis-lab/biobert-v1.1
|
bert-base-german-dbmdz-uncased | [
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68,305 | 2022-05-03T23:44:15Z | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
tags:
- text-classification
- document sections
- sentence classification
- document classification
- medical
- health
- biomedical
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# scibert-scivocab-uncased_pub_section
- original model file name: textclassifer_scibert_scivocab_uncased_pubmed_full
- This is a fine-tuned checkpoint of `allenai/scibert_scivocab_uncased` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## usage in python
install transformers as needed: `pip install -U transformers`
run the following, changing the example text to your use case:
```
from transformers import pipeline
model_tag = "ml4pubmed/scibert-scivocab-uncased_pub_section"
classifier = pipeline(
'text-classification',
model=model_tag,
)
prompt = """
Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
"""
classifier(
prompt,
) # classify the sentence
```
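The pipeline returns the predicted section label with a confidence score; the line below is illustrative output, not a recorded run:
```
[{'label': 'RESULTS', 'score': 0.91}]
```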
## metadata
### training_metrics
- date_run: Apr-25-2022_t-03
- huggingface_tag: allenai/scibert_scivocab_uncased
### training_parameters
- date_run: Apr-25-2022_t-03
- huggingface_tag: allenai/scibert_scivocab_uncased
|