Columns:
- modelId: string (length 4–81)
- tags: list
- pipeline_tag: string (17 classes)
- config: dict
- downloads: int64 (range 0–59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (length 51–438k)
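Each row below follows this schema, in column order: modelId, tags, pipeline_tag, config, downloads, first_commit, card. As a minimal sketch of working with such a dump, assuming it has been exported to a JSON Lines file whose keys match the column names (the filename is a placeholder, not part of the dump):

```python
import json

# Load the dump, one JSON object per row; "models.jsonl" is a
# placeholder path -- the dump itself does not name a file.
with open("models.jsonl") as f:
    rows = [json.loads(line) for line in f]

# Example query: fill-mask models, sorted by download count.
fill_mask = sorted(
    (r for r in rows if r.get("pipeline_tag") == "fill-mask"),
    key=lambda r: r.get("downloads", 0),
    reverse=True,
)
for r in fill_mask:
    print(r["modelId"], r["downloads"])
```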
Declan/NPR_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2023-01-29T06:57:18Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: mobilebert_add_GLUE_Experiment_logit_kd_stsb_128 results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue config: stsb split: validation args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.041438738522880283 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_stsb_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 1.1505 - Pearson: 0.0470 - Spearmanr: 0.0414 - Combined Score: 0.0442 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 2.524 | 1.0 | 45 | 1.3607 | -0.0066 | -0.0281 | -0.0174 | | 1.0877 | 2.0 | 90 | 1.1729 | 0.0446 | 0.0497 | 0.0472 | | 1.0648 | 3.0 | 135 | 1.1505 | 0.0470 | 0.0414 | 0.0442 | | 1.0737 | 4.0 | 180 | 1.1564 | 0.0472 | 0.0464 | 0.0468 | | 1.0445 | 5.0 | 225 | 1.1971 | 0.0529 | 0.0575 | 0.0552 | | 1.0296 | 6.0 | 270 | 1.1723 | 0.0578 | 0.0727 | 0.0652 | | 1.026 | 7.0 | 315 | 1.2735 | 0.0621 | 0.0606 | 0.0614 | | 1.0216 | 8.0 | 360 | 1.2214 | 0.0666 | 0.0700 | 0.0683 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
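The row above is tagged `fill-mask`, with a `BertForMaskedLM` architecture in its config. A hedged sketch of querying such a model through the `transformers` pipeline API (the example sentence is illustrative, and nothing in the row's metadata promises output quality):

```python
from transformers import pipeline

# The pipeline_tag column ("fill-mask") names the task to request;
# the modelId column gives the checkpoint to load.
unmasker = pipeline("fill-mask", model="Declan/NPR_model_v1")

# BERT-style models expect the [MASK] placeholder token.
for pred in unmasker("The reporter filed the [MASK] before the deadline."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```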
Declan/WallStreetJournal_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: mobilebert_add_GLUE_Experiment_logit_kd_stsb_256 results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue config: stsb split: validation args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.005083382635565227 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_stsb_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 1.1476 - Pearson: 0.0175 - Spearmanr: 0.0051 - Combined Score: 0.0113 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 2.1451 | 1.0 | 45 | 1.1476 | 0.0175 | 0.0051 | 0.0113 | | 1.0864 | 2.0 | 90 | 1.2303 | 0.0364 | 0.0268 | 0.0316 | | 1.0669 | 3.0 | 135 | 1.2794 | 0.0385 | 0.0299 | 0.0342 | | 1.0484 | 4.0 | 180 | 1.2755 | 0.0394 | 0.0387 | 0.0391 | | 1.0377 | 5.0 | 225 | 1.2931 | 0.0464 | 0.0436 | 0.0450 | | 1.0279 | 6.0 | 270 | 1.2147 | 0.0491 | 0.0574 | 0.0532 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
Declan/WallStreetJournal_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: mobilebert_add_GLUE_Experiment_logit_kd_wnli_256 results: - task: name: Text Classification type: text-classification dataset: name: GLUE WNLI type: glue config: wnli split: validation args: wnli metrics: - name: Accuracy type: accuracy value: 0.5633802816901409 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_wnli_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3452 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3473 | 1.0 | 5 | 0.3452 | 0.5634 | | 0.3469 | 2.0 | 10 | 0.3464 | 0.5634 | | 0.3467 | 3.0 | 15 | 0.3465 | 0.5634 | | 0.3465 | 4.0 | 20 | 0.3456 | 0.5634 | | 0.3466 | 5.0 | 25 | 0.3453 | 0.5634 | | 0.3466 | 6.0 | 30 | 0.3455 | 0.5634 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
Declan/WallStreetJournal_model_v4
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-findtuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9355 - name: F1 type: f1 value: 0.9356889728135667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-findtuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1680 - Accuracy: 0.9355 - F1: 0.9357 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7789 | 1.0 | 250 | 0.2603 | 0.9165 | 0.9153 | | 0.2091 | 2.0 | 500 | 0.1788 | 0.9275 | 0.9276 | | 0.1443 | 3.0 | 750 | 0.1680 | 0.9355 | 0.9357 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
Declan/WallStreetJournal_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.it split: train args: PAN-X.it metrics: - name: F1 type: f1 value: 0.846884028064383 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3252 - F1: 0.8469 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6217 | 1.0 | 420 | 0.3396 | 0.7677 | | 0.3206 | 2.0 | 840 | 0.3433 | 0.8114 | | 0.1871 | 3.0 | 1260 | 0.3252 | 0.8469 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.1 - Datasets 1.18.4 - Tokenizers 0.13.2
Declan/WallStreetJournal_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: mobilebert_add_GLUE_Experiment_logit_kd_mnli_256 results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue config: mnli split: validation_matched args: mnli metrics: - name: Accuracy type: accuracy value: 0.3295362082994304 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_mnli_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 1.7834 - Accuracy: 0.3295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.8865 | 1.0 | 3068 | 1.7940 | 0.3274 | | 1.8864 | 2.0 | 6136 | 1.7940 | 0.3274 | | 1.8864 | 3.0 | 9204 | 1.7944 | 0.3274 | | 1.8864 | 4.0 | 12272 | 1.7940 | 0.3274 | | 1.8864 | 5.0 | 15340 | 1.7938 | 0.3274 | | 1.8864 | 6.0 | 18408 | 1.7940 | 0.3274 | | 1.8864 | 7.0 | 21476 | 1.7944 | 0.3274 | | 1.8864 | 8.0 | 24544 | 1.7939 | 0.3274 | | 1.8864 | 9.0 | 27612 | 1.7939 | 0.3274 | | 1.8863 | 10.0 | 30680 | 1.7940 | 0.3274 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
DeepChem/ChemBERTa-10M-MLM
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
90
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.en split: train args: PAN-X.en metrics: - name: F1 type: f1 value: 0.7092760180995475 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4990 - F1: 0.7093 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8727 | 1.0 | 295 | 0.5063 | 0.6186 | | 0.4633 | 2.0 | 590 | 0.5089 | 0.6561 | | 0.3075 | 3.0 | 885 | 0.4990 | 0.7093 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.1 - Datasets 1.18.4 - Tokenizers 0.13.2
DeepChem/ChemBERTa-5M-MTR
[ "pytorch", "roberta", "transformers" ]
null
{ "architectures": [ "RobertaForRegression" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.78 +/- 23.23 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
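The card above leaves its usage stub unfinished ("TODO: Add your code"). One plausible completion using the two libraries the stub already imports; the repo_id and filename are placeholders, since the card does not state them:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; both arguments are placeholders,
# as the card never names its own repository or zip file.
checkpoint = load_from_hub(
    repo_id="<user>/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
print(model.policy)
```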
DeepChem/ChemBERTa-77M-MLM
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,416
null
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2524 - F1: 0.8591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3574 | 1.0 | 5005 | 0.2632 | 0.7974 | | 0.2157 | 2.0 | 10010 | 0.2545 | 0.8385 | | 0.1376 | 3.0 | 15015 | 0.2524 | 0.8591 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.7.1 - Datasets 1.18.4 - Tokenizers 0.13.2
DeepChem/SmilesTokenizer_PubChem_1M
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
227
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: KoRiF/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
DeepESP/gpt2-spanish-medium
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:ebooks", "transformers", "GPT-2", "Spanish", "ebooks", "nlg", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
340
null
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.23 inference: false datasets: - keremberke/hard-hat-detection model-index: - name: keremberke/yolov8m-hard-hat-detection results: - task: type: object-detection dataset: type: keremberke/hard-hat-detection name: hard-hat-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.81115 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8m-hard-hat-detection" src="https://huggingface.co/keremberke/yolov8m-hard-hat-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Hardhat', 'NO-Hardhat'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.24 ultralytics==8.0.23 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8m-hard-hat-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
DeepPavlov/bert-base-cased-conversational
[ "pytorch", "jax", "bert", "feature-extraction", "en", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,009
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wnut_17 metrics: - precision - recall - f1 - accuracy model-index: - name: my_awsome_wnut_model results: - task: name: Token Classification type: token-classification dataset: name: wnut_17 type: wnut_17 args: wnut_17 metrics: - name: Precision type: precision value: 0.48464163822525597 - name: Recall type: recall value: 0.2632066728452271 - name: F1 type: f1 value: 0.3411411411411412 - name: Accuracy type: accuracy value: 0.9386088666581164 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awsome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2858 - Precision: 0.4846 - Recall: 0.2632 - F1: 0.3411 - Accuracy: 0.9386 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2976 | 0.3873 | 0.1974 | 0.2615 | 0.9352 | | No log | 2.0 | 426 | 0.2858 | 0.4846 | 0.2632 | 0.3411 | 0.9386 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cpu - Datasets 2.1.0 - Tokenizers 0.12.1
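Every `card` value opens with YAML frontmatter between `---` markers (license, tags, and a model-index block carrying the declared metrics), followed by a markdown body. A small sketch of recovering those metrics from one card, assuming PyYAML is available and that the card text keeps its original line breaks (they are flattened to single lines in this rendering); `rows` is the list loaded in the earlier sketch:

```python
import yaml  # PyYAML, an assumed dependency

def parse_card(card_text: str) -> dict:
    """Split a model card into YAML frontmatter and markdown body."""
    # Cards open with "--- <yaml> ---", so splitting on the first two
    # "---" markers yields: empty prefix, frontmatter, body.
    _, front, body = card_text.split("---", 2)
    return {"meta": yaml.safe_load(front), "body": body.strip()}

# Example: print the metrics declared in a row's model-index block.
meta = parse_card(rows[0]["card"])["meta"]
for result in meta["model-index"][0]["results"]:
    for metric in result["metrics"]:
        print(metric["name"], metric["value"])
```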
DeepPavlov/bert-base-multilingual-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "multilingual", "arxiv:1704.05426", "arxiv:1809.05053", "arxiv:1908.10084", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
140
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: aliciatay/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
DeepPavlov/distilrubert-base-cased-conversational
[ "pytorch", "distilbert", "ru", "arxiv:2205.02340", "transformers" ]
null
{ "architectures": null, "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6,324
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - name: Accuracy type: accuracy value: 0.9183870967741935 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7721 - Accuracy: 0.9184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2896 | 1.0 | 318 | 3.2890 | 0.7432 | | 2.6284 | 2.0 | 636 | 1.8756 | 0.8377 | | 1.5483 | 3.0 | 954 | 1.1572 | 0.8961 | | 1.015 | 4.0 | 1272 | 0.8573 | 0.9132 | | 0.7953 | 5.0 | 1590 | 0.7721 | 0.9184 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
DeepPavlov/distilrubert-tiny-cased-conversational
[ "pytorch", "distilbert", "ru", "arxiv:2205.02340", "transformers" ]
null
{ "architectures": null, "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5,993
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: mobilebert_add_GLUE_Experiment_logit_kd_stsb results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue config: stsb split: validation args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.04810618310275214 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_stsb This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 1.1407 - Pearson: 0.0533 - Spearmanr: 0.0481 - Combined Score: 0.0507 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 1.7607 | 1.0 | 45 | 1.2881 | 0.0340 | 0.0258 | 0.0299 | | 1.0763 | 2.0 | 90 | 1.1761 | 0.0478 | 0.0438 | 0.0458 | | 1.0466 | 3.0 | 135 | 1.1550 | 0.0509 | 0.0390 | 0.0450 | | 1.0685 | 4.0 | 180 | 1.1407 | 0.0533 | 0.0481 | 0.0507 | | 1.0449 | 5.0 | 225 | 1.1527 | 0.0562 | 0.0478 | 0.0520 | | 1.0303 | 6.0 | 270 | 1.2257 | 0.0580 | 0.0606 | 0.0593 | | 1.0006 | 7.0 | 315 | 1.2018 | 0.0711 | 0.0736 | 0.0724 | | 0.9661 | 8.0 | 360 | 1.2391 | 0.0716 | 0.0848 | 0.0782 | | 0.9524 | 9.0 | 405 | 1.2005 | 0.0795 | 0.0749 | 0.0772 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
DeepPavlov/rubert-base-cased-conversational
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17,362
null
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/protective-equipment-detection model-index: - name: keremberke/yolov8n-protective-equipment-detection results: - task: type: object-detection dataset: type: keremberke/protective-equipment-detection name: protective-equipment-detection split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.24713 # min: 0.0 - max: 1.0 name: mAP@0.5(box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-protective-equipment-detection" src="https://huggingface.co/keremberke/yolov8n-protective-equipment-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['glove', 'goggles', 'helmet', 'mask', 'no_glove', 'no_goggles', 'no_helmet', 'no_mask', 'no_shoes', 'shoes'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-protective-equipment-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
DeepPavlov/rubert-base-cased-sentence
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1508.05326", "arxiv:1809.05053", "arxiv:1908.10084", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46,991
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 238.97 +/- 10.10 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
DeepPavlov/rubert-base-cased
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1905.07213", "transformers", "has_space" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
148,127
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: mobilebert_add_GLUE_Experiment_logit_kd_wnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE WNLI type: glue config: wnli split: validation args: wnli metrics: - name: Accuracy type: accuracy value: 0.5633802816901409 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_wnli This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3448 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3478 | 1.0 | 5 | 0.3460 | 0.5634 | | 0.3477 | 2.0 | 10 | 0.3480 | 0.4366 | | 0.3466 | 3.0 | 15 | 0.3459 | 0.5634 | | 0.3466 | 4.0 | 20 | 0.3448 | 0.5634 | | 0.3468 | 5.0 | 25 | 0.3451 | 0.5634 | | 0.3467 | 6.0 | 30 | 0.3461 | 0.5634 | | 0.3465 | 7.0 | 35 | 0.3465 | 0.5634 | | 0.3466 | 8.0 | 40 | 0.3466 | 0.5634 | | 0.3468 | 9.0 | 45 | 0.3457 | 0.5634 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
DeividasM/wav2vec2-large-xlsr-53-lithuanian
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "lt", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-01-29T09:53:46Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: mobilebert_add_GLUE_Experiment_logit_kd_mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue config: mnli split: validation_matched args: mnli metrics: - name: Accuracy type: accuracy value: 0.3295362082994304 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_mnli This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 1.7834 - Accuracy: 0.3295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.8866 | 1.0 | 3068 | 1.7941 | 0.3274 | | 1.8864 | 2.0 | 6136 | 1.7939 | 0.3274 | | 1.8864 | 3.0 | 9204 | 1.7944 | 0.3274 | | 1.8864 | 4.0 | 12272 | 1.7940 | 0.3274 | | 1.8864 | 5.0 | 15340 | 1.7938 | 0.3274 | | 1.8864 | 6.0 | 18408 | 1.7940 | 0.3274 | | 1.8864 | 7.0 | 21476 | 1.7944 | 0.3274 | | 1.8864 | 8.0 | 24544 | 1.7939 | 0.3274 | | 1.8864 | 9.0 | 27612 | 1.7939 | 0.3274 | | 1.8864 | 10.0 | 30680 | 1.7940 | 0.3274 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
DeltaHub/adapter_t5-3b_cola
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: nandysoham/0-clustered results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nandysoham/0-clustered This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7128 - Train End Logits Accuracy: 0.8102 - Train Start Logits Accuracy: 0.7412 - Validation Loss: 0.9487 - Validation End Logits Accuracy: 0.7328 - Validation Start Logits Accuracy: 0.6397 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 326, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.0078 | 0.7312 | 0.6503 | 0.9262 | 0.7481 | 0.6382 | 0 | | 0.7128 | 0.8102 | 0.7412 | 0.9487 | 0.7328 | 0.6397 | 1 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
DeltaHub/adapter_t5-3b_mrpc
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2023-01-29T09:58:07Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: KoRiF/ppo-PyramidsTraining 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
DeltaHub/lora_t5-base_mrpc
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: nandysoham/1-clustered results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nandysoham/1-clustered This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7785 - Train End Logits Accuracy: 0.7917 - Train Start Logits Accuracy: 0.7264 - Validation Loss: 0.9514 - Validation End Logits Accuracy: 0.7734 - Validation Start Logits Accuracy: 0.7014 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 138, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.1245 | 0.6957 | 0.6322 | 0.9694 | 0.7590 | 0.6906 | 0 | | 0.7785 | 0.7917 | 0.7264 | 0.9514 | 0.7734 | 0.7014 | 1 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
Denilson/gbert-base-germaner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - tr license: apache-2.0 tags: - whisper - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Turkish results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 tr type: mozilla-foundation/common_voice_11_0 config: tr split: test args: tr metrics: - name: Wer type: wer value: 16.318103103769815 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Turkish This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 tr dataset. It achieves the following results on the evaluation set: - Loss: 0.2860 - Wer: 16.3181 - Cer: 4.1450 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 20000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:| | 0.1563 | 1.0 | 2500 | 0.2524 | 19.8570 | 5.1738 | | 0.032 | 2.01 | 5000 | 0.2567 | 18.5627 | 4.7793 | | 0.013 | 3.01 | 7500 | 0.2637 | 17.7723 | 4.6664 | | 0.0057 | 4.02 | 10000 | 0.2703 | 17.0596 | 4.3662 | | 0.0012 | 5.02 | 12500 | 0.2696 | 17.8322 | 5.2286 | | 0.003 | 6.03 | 15000 | 0.2800 | 16.7200 | 4.2972 | | 0.0003 | 7.03 | 17500 | 0.2834 | 16.4091 | 4.2018 | | 0.0002 | 8.04 | 20000 | 0.2860 | 16.3181 | 4.1450 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
Deniskin/essays_small_2000
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: nandysoham/3-clustered results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nandysoham/3-clustered This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6964 - Train End Logits Accuracy: 0.8127 - Train Start Logits Accuracy: 0.7775 - Validation Loss: 0.8781 - Validation End Logits Accuracy: 0.7537 - Validation Start Logits Accuracy: 0.7338 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 596, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.0127 | 0.7370 | 0.6921 | 0.8770 | 0.7496 | 0.7321 | 0 | | 0.6964 | 0.8127 | 0.7775 | 0.8781 | 0.7537 | 0.7338 | 1 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
Deniskin/gpt3_medium
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
52
2023-01-29T10:21:40Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: nandysoham/4-clustered results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nandysoham/4-clustered This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7439 - Train End Logits Accuracy: 0.7927 - Train Start Logits Accuracy: 0.7423 - Validation Loss: 0.9033 - Validation End Logits Accuracy: 0.7560 - Validation Start Logits Accuracy: 0.6784 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 436, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.0687 | 0.6972 | 0.6540 | 0.8900 | 0.7617 | 0.6762 | 0 | | 0.7439 | 0.7927 | 0.7423 | 0.9033 | 0.7560 | 0.6784 | 1 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
Denver/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-29T10:28:33Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: nandysoham/5-clustered results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nandysoham/5-clustered This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5941 - Train End Logits Accuracy: 0.8333 - Train Start Logits Accuracy: 0.7955 - Validation Loss: 0.8305 - Validation End Logits Accuracy: 0.7820 - Validation Start Logits Accuracy: 0.7556 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 132, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.9118 | 0.7405 | 0.7093 | 0.8196 | 0.7744 | 0.7556 | 0 | | 0.5941 | 0.8333 | 0.7955 | 0.8305 | 0.7820 | 0.7556 | 1 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
Dhruva/Interstellar
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: nandysoham/12-clustered results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nandysoham/12-clustered This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6856 - Train End Logits Accuracy: 0.8145 - Train Start Logits Accuracy: 0.7542 - Validation Loss: 0.8791 - Validation End Logits Accuracy: 0.7585 - Validation Start Logits Accuracy: 0.7096 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 632, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.9975 | 0.7354 | 0.6632 | 0.8689 | 0.7719 | 0.7048 | 0 | | 0.6856 | 0.8145 | 0.7542 | 0.8791 | 0.7585 | 0.7096 | 1 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-50
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - common_voice_11_0 metrics: - wer model-index: - name: openai/whisper-medium results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_11_0 type: common_voice_11_0 config: ga-IE split: test args: ga-IE metrics: - name: Wer type: wer value: 35.22067363530778 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai/whisper-medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 1.1422 - Wer: 35.2207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 7000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1137 | 4.02 | 1000 | 0.9072 | 40.0987 | | 0.0153 | 9.02 | 2000 | 1.0351 | 38.7631 | | 0.0042 | 14.01 | 3000 | 1.0507 | 36.4402 | | 0.0013 | 19.0 | 4000 | 1.0924 | 36.2660 | | 0.0003 | 23.02 | 5000 | 1.1422 | 35.2207 | | 0.0001 | 28.02 | 6000 | 1.1688 | 35.3368 | | 0.0001 | 33.01 | 7000 | 1.1768 | 35.5110 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-75
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- license: mit metrics: - accuracy --- # Model Card for noisy_human_cnn <!-- Provide a quick summary of what the model is/does. --> A CNN with 2 input channels (Mel spectrograms and their deltas) over 5-second audio signals. # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Santiago Viquez, Ivan Padezhki - **Model type:** CNN for audio classification - **License:** MIT ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/santiviquez/noisy-human-recognition/ - **Demo [optional]:** [More Information Needed]
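The card describes only the input format, so the following PyTorch sketch illustrates what a 2-channel audio CNN of this kind could look like; every layer size is an assumption, and the real architecture lives in the linked repository:

```python
import torch
import torch.nn as nn

class NoisyHumanCNN(nn.Module):
    """Assumed layout: 2 input channels (Mel spectrogram + deltas) from a 5 s clip."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # 2 input channels
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse the time/frequency grid
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Batch of 5-second clips: (batch, channels, mel_bins, frames) — shapes assumed.
logits = NoisyHumanCNN()(torch.randn(4, 2, 64, 216))
print(logits.shape)
```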
DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: zinoubm/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # zinoubm/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0841 - Validation Loss: 0.1171 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6297, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1838 | 0.1269 | 0 | | 0.1079 | 0.1171 | 1 | | 0.0841 | 0.1171 | 2 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
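A minimal inference sketch for the checkpoint above (assumed usage — the card includes none; `framework="tf"` matches the Keras training it reports):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="zinoubm/bert-finetuned-ner",
    framework="tf",                 # the card reports TensorFlow training
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
```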
albert-base-v1
[ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38,156
2023-01-29T13:31:27Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: swinv2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2 This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-base-patch4-window12to16-192to256-22kto1k-ft) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.8.0 - Tokenizers 0.13.2
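Since the card leaves usage empty, here is a hedged classification sketch; the repo id is a placeholder because the card only gives the run name "swinv2":

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "your-namespace/swinv2"  # placeholder — the actual repo id is not stated
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")
with torch.no_grad():
    logits = model(**processor(images=image, return_tensors="pt")).logits
print(model.config.id2label[logits.argmax(-1).item()])
```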
albert-base-v2
[ "pytorch", "tf", "jax", "rust", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4,785,283
2023-01-29T13:35:27Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="brouthen/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
albert-large-v2
[ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26,792
2023-01-29T13:42:12Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1107.79 +/- 78.27 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
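The usage section above is left as a TODO; one plausible completion with the two libraries the stub imports is sketched below. The repo id and the checkpoint filename are assumptions — check the repo's file list before running:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Both values assumed: substitute the real repo id and .zip filename.
checkpoint = load_from_hub(
    repo_id="<this-repo>",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```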
albert-xlarge-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
341
null
Access to model sajinpgupta/smoke_detect is restricted and you are not in the authorized list. Visit https://huggingface.co/sajinpgupta/smoke_detect to ask for access.
albert-xlarge-v2
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,973
2023-01-29T13:47:06Z
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image inference: false language: - en --- # Rodent Diffusion 1.5 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. The **Rodent-Diffusion-1-5** checkpoint was created with a custom Stable Diffusion v1.4 model as the base. From the base model, small merges (0.1-0.3) were included from the models listed below. Some keywords may exist, but for the most part you don't need anything special. Files are located in the "Files and versions" tab. <a href="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/blob/main/rodent-diffusion-1.5.safetensors">Safetensors file</a> Models: - analogDiffusion - Knolling Case - RPGDiffusion - classicnegative - cuteRich - inkpunk - evoartMj4 - dreamshaper - deliberate # Examples <img src="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/resolve/main/00806-Professional%2C_full-colour%2C_HD_digital_portrait_photo_of_a_hipster._Detailed%2C_intricate_hair%2C_high_definition._Focused%2C_crisp%2C_cl_3642035934_Euler%20a.png" width="30%"/> <sub>Professional, full-colour, HD digital portrait photo of a hipster. Detailed, intricate hair, high definition. Focused, crisp, clear and sharp. Ultra-realistic cinematic film still. taken with the Canon m50, 50mm focal. pastel shades AND professional photo of a hipster with vivid, vibrant earthy tones. 1960s Technicolor 16mm celluloid film look. Coffee bar in the background. Decaf latte. Negative prompt: blurry, smudge, smear, painting, anime, sketch, doodle, illustration, drawing Steps: 42, Sampler: Euler a, CFG scale: 5.25, Seed: 3642035934, Size: 512x640, Denoising strength: 0.666, Hires upscale: 1.689, Hires upscaler: Latent (bicubic antialiased) </sub> <img src="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/resolve/main/Rodent.png" width="30%"/> <sub>Professional, full-colour, HD digital portrait photo of a humanoid rat. Detailed, intricate hair, high definition. Focused, crisp, clear and sharp. Ultra-realistic cinematic film still. taken with the Canon m50, 50mm focal. pastel shades AND professional photo of a rodent druid wearing amazing armour. Vibrant earthy tones. 1960s Technicolor 16mm celluloid film look. Gothic castle background. Negative prompt: blurry, smudge, smear, painting, anime, sketch, doodle, illustration, drawing Steps: 42, Sampler: Euler a, CFG scale: 5.25, Seed: 2537406181, Size: 512x640, Denoising strength: 0.666, Hires upscale: 1.689, Hires upscaler: Latent (bicubic antialiased) </sub> <img src="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/resolve/main/00827-Amazing_painting_of_a_stunning_African_woman._Incredible_hairstyle%2C_high_definition._Focused%2C_crisp%2C_clear_and_sharp._Ultra-real_3784463460_Euler%20a.png" width="30%"/> <sub> Amazing painting of a stunning African woman. Incredible hairstyle, high definition. Focused, crisp, clear and sharp. Ultra-realistic. vibrant colours. AND matte portrait painting, cute African lady from the future. Vibrant brush strokes. oil on canvas, realism, acrylic impressionism neo-science fiction aesthetic with fantasy undertones mixed to create a warm feeling. 
80's look and feel Negative prompt: 3d, render, blurry, smudge, smear, photo Steps: 42, Sampler: Euler a, CFG scale: 5.25, Seed: 3784463462, Size: 512x640, Denoising strength: 0.666, Hires upscale: 1.689, Hires upscaler: Latent (bicubic antialiased) </sub> <img src="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/resolve/main/00841-Anime_style_painting_of_a_Tokyo_street._Calm_and_peaceful._Relaxing._Incredible_definition_and_detail._Crisp%2C_clear_and_sharp_fo_2306894277_Euler%20a.png" width="30%"/> <sub>Anime style painting of a Tokyo street. Calm and peaceful. Relaxing. Incredible definition and detail. Crisp, clear and sharp focus. AND Anime inspired cinematic film still from the future the depicts a serene street during golden hour. Cel shading. Pastel shades and chilled vibes. Negative prompt: 3d, render, blurry, smudge, smear, photo Steps: 42, Sampler: Euler a, CFG scale: 5.25, Seed: 2306894277, Size: 512x640, Denoising strength: 0.666, Hires upscale: 1.689, Hires upscaler: Latent (bicubic antialiased) </sub> <img src="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/resolve/main/00849-Matte_painting_of_a_cat%2C_psychedelic_fractal_fur%2C_illusion%2C_ethereal_AND_oil_painting_of_a_surreal_cat_with_wild%2C_human-like_eye_2534465260_Euler%20a.png" width="30%"/> <sub>Matte painting of a cat, psychedelic fractal fur, illusion, ethereal AND oil painting of a surreal cat with wild, human-like eyes and a massive grin Negative prompt: 3d, render, blurry, smudge, smear, photo Steps: 42, Sampler: Euler a, CFG scale: 5.25, Seed: 2534465260, Size: 512x640, Denoising strength: 0.666, Hires upscale: 1.689, Hires upscaler: Latent (bicubic antialiased) </sub> Due to the strange licence mix, this model is for personal use only though I am working on an update with less restrictions. ## Original Stable Diffusion Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). 
- **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
albert-xxlarge-v2
[ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42,640
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.6438356041908264 --- # rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Medusa tattoo ![Medusa tattoo](images/Medusa_tattoo.jpg) #### Roses tattoo ![Roses tattoo](images/Roses_tattoo.jpg) #### Skull tattoo ![Skull tattoo](images/Skull_tattoo.jpg) #### Tribal tattoo ![Tribal tattoo](images/Tribal_tattoo.jpg) #### Viking tattoo ![Viking tattoo](images/Viking_tattoo.jpg)
bert-base-cased-finetuned-mrpc
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11,644
2023-01-29T14:04:38Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="aliciatay/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
bert-base-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,621,271
2023-01-29T14:06:25Z
--- license: creativeml-openrail-m base_model: /root/autodl-tmp/sd_weights/models--runwayml--stable-diffusion-v1-5/snapshots/889b629140e71758e1e0006e355c331a5744b4bf tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - jianleo/lora_ruhua_sd_05k These are LoRA adaption weights for /root/autodl-tmp/sd_weights/models--runwayml--stable-diffusion-v1-5/snapshots/889b629140e71758e1e0006e355c331a5744b4bf. The weights were trained on a photo of rha woman using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
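A loading sketch for these weights (assumed usage — the card itself shows only sample images; `load_attn_procs` is the usual way to attach DreamBooth-LoRA attention weights in diffusers of this vintage):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA attention weights trained above.
pipe.unet.load_attn_procs("jianleo/lora_ruhua_sd_05k")

# "a photo of rha woman" is the instance prompt reported in the card.
image = pipe("a photo of rha woman", num_inference_steps=30).images[0]
image.save("rha_woman.png")
```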
bert-base-chinese
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "zh", "arxiv:1810.04805", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,377,486
2023-01-29T14:08:29Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- Stable Diffusion v1.5 model trained on Oscar Health avatar pictures
bert-base-german-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "exbert", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
175,983
null
--- tags: - generated_from_trainer datasets: - silicone metrics: - accuracy model-index: - name: twitter-roberta-base-sentiment results: - task: name: Text Classification type: text-classification dataset: name: silicone type: silicone config: swda split: test args: swda metrics: - name: Accuracy type: accuracy value: 0.7258658806190126 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-sentiment This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the silicone dataset. It achieves the following results on the evaluation set: - Loss: 0.9158 - Accuracy: 0.7259 - Micro-precision: 0.7259 - Micro-recall: 0.7259 - Micro-f1: 0.7259 - Macro-precision: 0.3430 - Macro-recall: 0.3267 - Macro-f1: 0.3195 - Weighted-precision: 0.6825 - Weighted-recall: 0.7259 - Weighted-f1: 0.6938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Micro-precision | Micro-recall | Micro-f1 | Macro-precision | Macro-recall | Macro-f1 | Weighted-precision | Weighted-recall | Weighted-f1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:| | 0.9087 | 1.0 | 2980 | 0.9158 | 0.7259 | 0.7259 | 0.7259 | 0.7259 | 0.3430 | 0.3267 | 0.3195 | 0.6825 | 0.7259 | 0.6938 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
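A minimal dialogue-act classification sketch for the fine-tuned checkpoint above (assumed usage; the run reuses the base model's name, so the repo id below is a placeholder):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="your-namespace/twitter-roberta-base-sentiment",  # placeholder repo id
)
print(clf("Okay, so what do you think about that?"))
```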
bert-base-german-dbmdz-uncased
[ "pytorch", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
68,305
2023-01-29T14:18:08Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.67 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="brouthen/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
bert-base-multilingual-uncased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
328,585
2023-01-29T14:24:10Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: qtaxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="aliciatay/qtaxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
bert-base-uncased
[ "pytorch", "tf", "jax", "rust", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
59,663,489
2023-01-29T14:25:33Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1721.92 +/- 403.54 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
bert-large-cased-whole-word-masking-finetuned-squad
[ "pytorch", "tf", "jax", "rust", "safetensors", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,214
2023-01-29T14:26:11Z
--- license: openrail++ tags: - stable-diffusion - text-to-image pinned: true --- # Model Card for flex-diffusion-2-1 <!-- Provide a quick summary of what the model is/does. [Optional] --> stable-diffusion-2-1 (stabilityai/stable-diffusion-2-1) finetuned with different aspect ratios. ## TLDR: ### There are 2 models in this repo: - One based on stable-diffusion-2-1 (stabilityai/stable-diffusion-2-1) finetuned for 6k steps. - One based on stable-diffusion-2-base (stabilityai/stable-diffusion-2-base) finetuned for 6k steps, on the same dataset. For usage, see - [How to Get Started with the Model](#how-to-get-started-with-the-model) ### It aims to solve the following issues: 1. Generated images looks like they are cropped from a larger image. 2. Generating non-square images creates weird results, due to the model being trained on square images. Examples: | resolution | model | stable diffusion | flex diffusion | |:---------------:|:-------:|:----------------------------:|:-----------------------------:| | 576x1024 (9:16) | v2-1 | ![img](imgs/21-576-1024.png) | ![img](imgs/21f-576-1024.png) | | 576x1024 (9:16) | v2-base | ![img](imgs/2b-576-1024.png) | ![img](imgs/2bf-576-1024.png) | | 1024x576 (16:9) | v2-1 | ![img](imgs/21-1024-576.png) | ![img](imgs/21f-1024-576.png) | | 1024x576 (16:9) | v2-base | ![img](imgs/2b-1024-576.png) | ![img](imgs/2bf-1024-576.png) | ### Limitations: 1. It's trained on a small dataset, so it's improvements may be limited. 2. For each aspect ratio, it's trained on only a fixed resolution. So it may not be able to generate images of different resolutions. For 1:1 aspect ratio, it's fine-tuned at 512x512, although flex-diffusion-2-1 was last finetuned at 768x768. ### Potential improvements: 1. Train on a larger dataset. 2. Train on different resolutions even for the same aspect ratio. 3. Train on specific aspect ratios, instead of a range of aspect ratios. # Table of Contents - [Model Card for flex-diffusion-2-1](#model-card-for--model_id-) - [Table of Contents](#table-of-contents) - [Table of Contents](#table-of-contents-1) - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Direct Use](#direct-use) - [Downstream Use [Optional]](#downstream-use-optional) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Training Procedure](#training-procedure) - [Preprocessing](#preprocessing) - [Speeds, Sizes, Times](#speeds-sizes-times) - [Evaluation](#evaluation) - [Testing Data, Factors & Metrics](#testing-data-factors--metrics) - [Testing Data](#testing-data) - [Factors](#factors) - [Metrics](#metrics) - [Results](#results) - [Model Examination](#model-examination) - [Environmental Impact](#environmental-impact) - [Technical Specifications [optional]](#technical-specifications-optional) - [Model Architecture and Objective](#model-architecture-and-objective) - [Compute Infrastructure](#compute-infrastructure) - [Hardware](#hardware) - [Software](#software) - [Citation](#citation) - [Glossary [optional]](#glossary-optional) - [More Information [optional]](#more-information-optional) - [Model Card Authors [optional]](#model-card-authors-optional) - [Model Card Contact](#model-card-contact) - [How to Get Started with the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description <!-- Provide a longer summary of what this model is/does. 
--> stable-diffusion-2-1 (stabilityai/stable-diffusion-2-1) finetuned for dynamic aspect ratios. finetuned resolutions: | | width | height | aspect ratio | |---:|--------:|---------:|:---------------| | 0 | 512 | 1024 | 1:2 | | 1 | 576 | 1024 | 9:16 | | 2 | 576 | 960 | 3:5 | | 3 | 640 | 1024 | 5:8 | | 4 | 512 | 768 | 2:3 | | 5 | 640 | 896 | 5:7 | | 6 | 576 | 768 | 3:4 | | 7 | 512 | 640 | 4:5 | | 8 | 640 | 768 | 5:6 | | 9 | 640 | 704 | 10:11 | | 10 | 512 | 512 | 1:1 | | 11 | 704 | 640 | 11:10 | | 12 | 768 | 640 | 6:5 | | 13 | 640 | 512 | 5:4 | | 14 | 768 | 576 | 4:3 | | 15 | 896 | 640 | 7:5 | | 16 | 768 | 512 | 3:2 | | 17 | 1024 | 640 | 8:5 | | 18 | 960 | 576 | 5:3 | | 19 | 1024 | 576 | 16:9 | | 20 | 1024 | 512 | 2:1 | - **Developed by:** Jonathan Chang - **Model type:** Diffusion-based text-to-image generation model - **Language(s)**: English - **License:** creativeml-openrail-m - **Parent Model:** https://huggingface.co/stabilityai/stable-diffusion-2-1 - **Resources for more information:** More information needed # Uses - see https://huggingface.co/stabilityai/stable-diffusion-2-1 # Training Details ## Training Data - LAION aesthetic dataset, subset of it with 6+ rating - https://laion.ai/blog/laion-aesthetics/ - https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus - I only used a small portion of that, see [Preprocessing](#preprocessing) - most common aspect ratios in the dataset (before preprocessing) | | aspect_ratio | counts | |---:|:---------------|---------:| | 0 | 1:1 | 154727 | | 1 | 3:2 | 119615 | | 2 | 2:3 | 61197 | | 3 | 4:3 | 52276 | | 4 | 16:9 | 38862 | | 5 | 400:267 | 21893 | | 6 | 3:4 | 16893 | | 7 | 8:5 | 16258 | | 8 | 4:5 | 15684 | | 9 | 6:5 | 12228 | | 10 | 1000:667 | 12097 | | 11 | 2:1 | 11006 | | 12 | 800:533 | 10259 | | 13 | 5:4 | 9753 | | 14 | 500:333 | 9700 | | 15 | 250:167 | 9114 | | 16 | 5:3 | 8460 | | 17 | 200:133 | 7832 | | 18 | 1024:683 | 7176 | | 19 | 11:10 | 6470 | - predefined aspect ratios | | width | height | aspect ratio | |---:|--------:|---------:|:---------------| | 0 | 512 | 1024 | 1:2 | | 1 | 576 | 1024 | 9:16 | | 2 | 576 | 960 | 3:5 | | 3 | 640 | 1024 | 5:8 | | 4 | 512 | 768 | 2:3 | | 5 | 640 | 896 | 5:7 | | 6 | 576 | 768 | 3:4 | | 7 | 512 | 640 | 4:5 | | 8 | 640 | 768 | 5:6 | | 9 | 640 | 704 | 10:11 | | 10 | 512 | 512 | 1:1 | | 11 | 704 | 640 | 11:10 | | 12 | 768 | 640 | 6:5 | | 13 | 640 | 512 | 5:4 | | 14 | 768 | 576 | 4:3 | | 15 | 896 | 640 | 7:5 | | 16 | 768 | 512 | 3:2 | | 17 | 1024 | 640 | 8:5 | | 18 | 960 | 576 | 5:3 | | 19 | 1024 | 576 | 16:9 | | 20 | 1024 | 512 | 2:1 | ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing 1. download files with url &amp; caption from https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus - I only used the first file `train-00000-of-00007-29aec9150af50f9f.parquet` 2. 
use img2dataset to convert to webdataset - https://github.com/rom1504/img2dataset - I put train-00000-of-00007-29aec9150af50f9f.parquet in a folder called `first-file` - the output folder is `/mnt/aesthetics6plus`, change this to your own folder ```bash echo INPUT_FOLDER=first-file echo OUTPUT_FOLDER=/mnt/aesthetics6plus img2dataset --url_list $INPUT_FOLDER --input_format "parquet"\ --url_col "URL" --caption_col "TEXT" --output_format webdataset\ --output_folder $OUTPUT_FOLDER --processes_count 3 --thread_count 6 --image_size 1024 --resize_only_if_bigger --resize_mode=keep_ratio_largest \ --save_additional_columns '["WIDTH","HEIGHT","punsafe","similarity"]' --enable_wandb True ``` 3. The data-loading code will do preprocessing on the fly, so no need to do anything else. But it's not optimized for speed, the GPU utilization fluctuates between 80% and 100%. And it's not written for multi-GPU training, so use it with caution. The code will do the following: - use webdataset to load the data - calculate the aspect ratio of each image - find the closest aspect ratio & it's associated resolution from the predefined resolutions: `argmin(abs(aspect_ratio - predefined_aspect_ratios))`. E.g. if the aspect ratio is 1:3, the closest resolution is 1:2. and it's associated resolution is 512x1024. - keeping the aspect ratio, resize the image such that it's larger or equal to the associated resolution on each side. E.g. resize to 512x(512*3) = 512x1536 - random crop the image to the associated resolution. E.g. crop to 512x1024 - if more than 10% of the image is lost in the cropping, discard this example. - batch examples by aspect ratio, so all examples in a batch have the same aspect ratio ### Speeds, Sizes, Times - Dataset size: 100k image-caption pairs, before filtering. - I didn't wait for the whole dataset to be downloaded, I copied the first 10 tar files and their index files to a new folder called `aesthetics6plus-small`, with 100k image-caption pairs in total. The full dataset is a lot bigger. - Hardware: 1 RTX3090 GPUs - Optimizer: 8bit Adam - Batch size: 32 - actual batch size: 2 - gradient_accumulation_steps: 16 - effective batch size: 32 - Learning rate: warmup to 2e-6 for 500 steps and then kept constant - Learning rate: 2e-6 - Training steps: 6k - Epoch size (approximate): 32 * 6k / 100k = 1.92 (not accounting for the filtering) - Each example is seen 1.92 times on average. - Training time: approximately 1 day ## Results More information needed # Model Card Authors Jonathan Chang # How to Get Started with the Model Use the code below to get started with the model. 
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler, UNet2DConditionModel

def use_DPM_solver(pipe):
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
    return pipe

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    unet=UNet2DConditionModel.from_pretrained("ttj/flex-diffusion-2-1", subfolder="2-1/unet", torch_dtype=torch.float16),
    torch_dtype=torch.float16,
)
# for v2-base, use the following lines instead
#pipe = StableDiffusionPipeline.from_pretrained(
#    "stabilityai/stable-diffusion-2-base",
#    unet=UNet2DConditionModel.from_pretrained("ttj/flex-diffusion-2-1", subfolder="2-base/unet", torch_dtype=torch.float16),
#    torch_dtype=torch.float16)
pipe = use_DPM_solver(pipe).to("cuda")

prompt = "a professional photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
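For reference, here is a minimal sketch of the aspect-ratio bucketing described in [Preprocessing](#preprocessing). The resolution list and the 10% discard threshold come from this card; the function name and the PIL-based implementation are illustrative, not the actual training code.

```python
import math
import random

from PIL import Image

# (width, height) pairs from the predefined-resolutions table above
RESOLUTIONS = [
    (512, 1024), (576, 1024), (576, 960), (640, 1024), (512, 768),
    (640, 896), (576, 768), (512, 640), (640, 768), (640, 704),
    (512, 512), (704, 640), (768, 640), (640, 512), (768, 576),
    (896, 640), (768, 512), (1024, 640), (960, 576), (1024, 576),
    (1024, 512),
]


def bucket_and_crop(img: Image.Image):
    """Random-crop `img` to the closest predefined resolution, or return
    None if more than 10% of the resized image would be lost."""
    aspect_ratio = img.width / img.height
    # argmin(abs(aspect_ratio - predefined_aspect_ratios))
    w, h = min(RESOLUTIONS, key=lambda wh: abs(aspect_ratio - wh[0] / wh[1]))
    # resize, keeping the aspect ratio, so both sides cover the target
    scale = max(w / img.width, h / img.height)
    rw, rh = math.ceil(img.width * scale), math.ceil(img.height * scale)
    img = img.resize((rw, rh))
    if w * h < 0.9 * rw * rh:  # more than 10% lost in the crop: discard
        return None
    x, y = random.randint(0, rw - w), random.randint(0, rh - h)
    return img.crop((x, y, x + w, y + h))
```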
bert-large-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
388,769
null
--- license: mit tags: - generated_from_trainer model-index: - name: simba-1.3b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # simba-1.3b This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
bert-large-uncased-whole-word-masking-finetuned-squad
[ "pytorch", "tf", "jax", "safetensors", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
480,510
2023-01-29T14:31:50Z
---
license: creativeml-openrail-m
base_model: /root/autodl-tmp/sd_weights/models--runwayml--stable-diffusion-v1-5/snapshots/889b629140e71758e1e0006e355c331a5744b4bf
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - jianleo/lora_ruhua_sd_1k

These are LoRA adaptation weights for /root/autodl-tmp/sd_weights/models--runwayml--stable-diffusion-v1-5/snapshots/889b629140e71758e1e0006e355c331a5744b4bf. The weights were trained on the prompt "a photo of rha woman" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
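A minimal inference sketch (assuming the `diffusers` LoRA attention-processor loading API; the base checkpoint is the Stable Diffusion v1-5 snapshot referenced above, and the sampler settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Base pipeline: the SD v1-5 checkpoint these LoRA weights were trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Load the LoRA adaptation weights from this repository.
pipe.unet.load_attn_procs("jianleo/lora_ruhua_sd_1k")
pipe.to("cuda")

# "rha" is the instance token used during DreamBooth training.
image = pipe("a photo of rha woman", num_inference_steps=25).images[0]
image.save("rha_woman.png")
```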
bert-large-uncased-whole-word-masking
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
76,685
2023-01-29T14:32:10Z
---
language: de
datasets:
- Short-Answer-Feedback/saf_legal_domain_german
tags:
- generated_from_trainer
widget:
- text: "Antwort: Wird sich nicht an die Auflagen gehalten (unzureichende Eigenbemühung), droht eine Sperrzeit von 1-2 Wochen. Dadurch wird für die genannte zeit keine Leistung gezahlt, die Anspruchsdauer vermindert sich insgesamt. Bei wichtigen Gründen wird die Sperrzeit nicht verordnet. Lösung: Merkblatt 1 für Arbeitslose, S. 22: Erbringen Sie die Pflichten im Zusammenhang mit den Eigenbemühungen nicht, nicht rechtzeitig oder nicht vollständig, tritt eine Sperrzeit (0,75 p) ein. Merkblatt 1 für Arbeitslose, S. 55: Die Dauer einer Sperrzeit bei unzureichenden Eigenbemühungen beträgt zwei Wochen. (0,25 p). Frage: Mit welcher Folge und welcher Dauer müssen Sie rechnen, wenn Sie Ihre notwendigen Eigenbemühungen nicht rechtzeitig oder nicht vollständig erfüllen?"
---

# mbart-score-finetuned-saf-legal-domain

This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the [saf_legal_domain_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_legal_domain_german) dataset for Short Answer Feedback (SAF).

## Model description

This model was built on top of [mBART](https://arxiv.org/abs/2001.08210), which is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages.

It expects inputs in the following format:

```
Antwort: [answer] Lösung: [reference_answer] Frage: [question]
```

In the example above, `[answer]`, `[reference_answer]` and `[question]` should be replaced by the provided answer, the reference answer and the question to which they refer, respectively.

The outputs are formatted as follows:

```
[score] Feedback: [feedback]
```

Hence, `[score]` will be a numeric value between 0 and 1, while `[feedback]` will be the textual feedback generated by the model according to the given answer.

## Intended uses & limitations

This model is intended to be used for Short Answer Feedback generation in the domain of German social law. Thus, it is not expected to perform particularly well on sets of questions and answers outside this scope.

It is important to acknowledge that the model underperforms when a question that was not seen during training is given as input for inference. In particular, it tends to classify most answers as being correct and does not provide relevant feedback in such cases. Nevertheless, this limitation could be partially overcome by extending the dataset with the desired question (and associated answers) and fine-tuning the model on the new data for a few epochs.

## Training and evaluation data

As mentioned previously, the model was trained on the [saf_legal_domain_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_legal_domain_german) dataset, which is divided into the following splits.

| Split                 | Number of examples |
| --------------------- | ------------------ |
| train                 | 1596               |
| validation            | 400                |
| test_unseen_answers   | 221                |
| test_unseen_questions | 275                |

Evaluation was performed on the `test_unseen_answers` and `test_unseen_questions` splits.

## Training procedure

The [Trainer API](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainer) was used to fine-tune the model. The code utilized for pre-processing and training was mostly adapted from the [summarization script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) made available by HuggingFace.
Training was completed in a little over 1 hour on a GPU on Google Colab.

### Training hyperparameters

The following hyperparameters were used during training:
- num_epochs: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- learning_rate: 6e-05
- lr_scheduler_type: linear
- train_batch_size: 1
- gradient_accumulation_steps: 4
- eval_batch_size: 4
- mixed_precision_training: Native AMP
- seed: 42

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2

## Evaluation results

The generated feedback was evaluated by means of the [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu), [ROUGE-2](https://huggingface.co/spaces/evaluate-metric/rouge), [METEOR](https://huggingface.co/spaces/evaluate-metric/meteor) and [BERTScore](https://huggingface.co/spaces/evaluate-metric/bertscore) metrics from HuggingFace, while the [Root Mean Squared Error](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error) from scikit-learn was used to evaluate the predicted scores against the gold label scores. The following results were achieved.

| Split                 | SacreBLEU | ROUGE-2 | METEOR | BERTScore | RMSE  |
| --------------------- | :-------: | :-----: | :----: | :-------: | :---: |
| test_unseen_answers   | 39.4      | 42.3    | 54.3   | 52.6      | 0.190 |
| test_unseen_questions | 2.8       | 5.0     | 17.9   | 10.7      | 0.317 |

The script used to compute these metrics and perform evaluation can be found in the `evaluation.py` file in this repository.

## Usage

The example below shows how the model can be applied to generate feedback to a given answer.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained('Short-Answer-Feedback/mbart-score-finetuned-saf-legal-domain')
tokenizer = AutoTokenizer.from_pretrained('Short-Answer-Feedback/mbart-score-finetuned-saf-legal-domain')

example_input = 'Antwort: Wird sich nicht an die Auflagen gehalten (unzureichende Eigenbemühung), droht eine Sperrzeit von 1-2 Wochen. Dadurch wird für die genannte zeit keine Leistung gezahlt, die Anspruchsdauer vermindert sich insgesamt. Bei wichtigen Gründen wird die Sperrzeit nicht verordnet. Lösung: Merkblatt 1 für Arbeitslose, S. 22: Erbringen Sie die Pflichten im Zusammenhang mit den Eigenbemühungen nicht, nicht rechtzeitig oder nicht vollständig, tritt eine Sperrzeit (0,75 p) ein. Merkblatt 1 für Arbeitslose, S. 55: Die Dauer einer Sperrzeit bei unzureichenden Eigenbemühungen beträgt zwei Wochen. (0,25 p). Frage: Mit welcher Folge und welcher Dauer müssen Sie rechnen, wenn Sie Ihre notwendigen Eigenbemühungen nicht rechtzeitig oder nicht vollständig erfüllen?'
inputs = tokenizer(example_input, max_length=256, padding='max_length', truncation=True, return_tensors='pt')

generated_tokens = model.generate(
    inputs['input_ids'],
    attention_mask=inputs['attention_mask'],
    max_length=128
)
output = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
```

The output produced by the model then looks as follows:

```
0.75 Feedback: Es ist richtig, dass Sie mit einer Sperrzeit rechnen müssen, in der Sie keine Leistung bekommen. Die gesetzlich vorgesehene Sperrzeit bei unzureichenden Eigenbemühungen beträgt jedoch zwei Wochen.
```
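Small helpers for assembling inputs in the expected format and splitting outputs like the one above (a sketch; the helper names are ours and not part of this repository):

```python
def build_saf_input(answer: str, reference_answer: str, question: str) -> str:
    """Assemble the 'Antwort: ... Lösung: ... Frage: ...' input string."""
    return f"Antwort: {answer} Lösung: {reference_answer} Frage: {question}"


def parse_saf_output(output: str):
    """Split '[score] Feedback: [feedback]' into (score, feedback).
    Assumes the output follows the documented format."""
    score, _, feedback = output.partition(" Feedback: ")
    return float(score), feedback
```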
bert-large-uncased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,058,496
2023-01-29T14:33:36Z
---
tags:
- generated_from_trainer
model-index:
- name: speller-t5-ds
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# speller-t5-ds

This model is a fine-tuned version of [sberbank-ai/ruT5-base](https://huggingface.co/sberbank-ai/ruT5-base) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
distilbert-base-cased-distilled-squad
[ "pytorch", "tf", "rust", "safetensors", "openvino", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "model-index", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
257,745
2023-01-29T14:46:06Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
distilbert-base-multilingual-cased
[ "pytorch", "tf", "onnx", "safetensors", "distilbert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,339,633
2023-01-29T14:51:31Z
--- tags: - generated_from_keras_callback model-index: - name: layoutlm-funsd-tf results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd-tf This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2451 - Validation Loss: 0.7339 - Train Overall Precision: 0.7247 - Train Overall Recall: 0.8058 - Train Overall F1: 0.7631 - Train Overall Accuracy: 0.7976 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch | |:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:| | 1.6758 | 1.4035 | 0.2734 | 0.3191 | 0.2945 | 0.5113 | 0 | | 1.1350 | 0.8802 | 0.5626 | 0.6538 | 0.6048 | 0.7313 | 1 | | 0.7417 | 0.6927 | 0.6604 | 0.7602 | 0.7068 | 0.7805 | 2 | | 0.5568 | 0.6715 | 0.7039 | 0.7501 | 0.7263 | 0.7823 | 3 | | 0.4493 | 0.6464 | 0.7073 | 0.7782 | 0.7410 | 0.7980 | 4 | | 0.3732 | 0.6112 | 0.7108 | 0.7858 | 0.7464 | 0.8182 | 5 | | 0.2949 | 0.6429 | 0.7123 | 0.7988 | 0.7531 | 0.8070 | 6 | | 0.2451 | 0.7339 | 0.7247 | 0.8058 | 0.7631 | 0.7976 | 7 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
distilbert-base-uncased-finetuned-sst-2-english
[ "pytorch", "tf", "rust", "safetensors", "distilbert", "text-classification", "en", "dataset:sst2", "dataset:glue", "arxiv:1910.01108", "doi:10.57967/hf/0181", "transformers", "license:apache-2.0", "model-index", "has_space" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,060,704
2023-01-29T15:00:16Z
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: BachNgoH/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
13on/kw2t-wishes
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2023-01-29T17:49:20Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the course notebook (not a library import)
model = load_from_hub(repo_id="eldraco/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
61birds/distilbert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-29T19:17:48Z
---
language: es
thumbnail: https://i.imgur.com/jgBdimh.png
license: apache-2.0
duplicated_from: mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es
---

# BETO (Spanish BERT) + Spanish SQuAD2.0 + distillation using 'bert-base-multilingual-cased' as teacher

This model is a fine-tuned (on [SQuAD-es-v2.0](https://github.com/ccasimiro88/TranslateAlignRetrieve)) and **distilled** version of [BETO](https://github.com/dccuchile/beto) for **Q&A**.

Distillation makes the model **smaller, faster, cheaper and lighter** than [bert-base-spanish-wwm-cased-finetuned-spa-squad2-es](https://github.com/huggingface/transformers/blob/master/model_cards/mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es/README.md)

This model was fine-tuned on the same dataset, but using **distillation** during the process as mentioned above (and one more training epoch).

The **teacher model** for the distillation was `bert-base-multilingual-cased`. It is the same teacher used for `distilbert-base-multilingual-cased` AKA [**DistilmBERT**](https://github.com/huggingface/transformers/tree/master/examples/distillation) (on average, it is twice as fast as **mBERT-base**).

## Details of the downstream task (Q&A) - Dataset

<details>

[SQuAD-es-v2.0](https://github.com/ccasimiro88/TranslateAlignRetrieve)

| Dataset                 | # Q&A |
| ----------------------- | ----- |
| SQuAD2.0 Train          | 130 K |
| SQuAD2.0-es-v2.0        | 111 K |
| SQuAD2.0 Dev            | 12 K  |
| SQuAD-es-v2.0-small Dev | 69 K  |

</details>

## Model training

The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:

```bash
!export SQUAD_DIR=/path/to/squad-v2_spanish \
  && python transformers/examples/distillation/run_squad_w_distillation.py \
  --model_type bert \
  --model_name_or_path dccuchile/bert-base-spanish-wwm-cased \
  --teacher_type bert \
  --teacher_name_or_path bert-base-multilingual-cased \
  --do_train \
  --do_eval \
  --do_lower_case \
  --train_file $SQUAD_DIR/train-v2.json \
  --predict_file $SQUAD_DIR/dev-v2.json \
  --per_gpu_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 5.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /content/model_output \
  --save_steps 5000 \
  --threads 4 \
  --version_2_with_negative
```

## Results:

TBA

### Model in action

Fast usage with **pipelines**:

```python
from transformers import pipeline

# Important!: By now the QA pipeline is not compatible with fast tokenizers, but they are working on it.
# So, pass {"use_fast": False} to the tokenizer, as in the following example:

nlp = pipeline(
    'question-answering',
    model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
    tokenizer=(
        'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
        {"use_fast": False}
    )
)

nlp(
    {
        'question': '¿Para qué lenguaje está trabajando?',
        'context': 'Manuel Romero está colaborando activamente con huggingface/transformers ' +
                   'para traer el poder de las últimas técnicas de procesamiento de lenguaje natural al idioma español'
    }
)
# Output: {'answer': 'español', 'end': 169, 'score': 0.67530957344621, 'start': 163}
```

Play with this model and `pipelines` in a Colab:

<a href="https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Using_Spanish_BERT_fine_tuned_for_Q%26A_pipelines.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a>

<details>

1. Set the context and ask some questions:

![Set context and questions](https://media.giphy.com/media/mCIaBpfN0LQcuzkA2F/giphy.gif)

2. Run predictions:

![Run the model](https://media.giphy.com/media/WT453aptcbCP7hxWTZ/giphy.gif)

</details>

More about `Huggingface pipelines`? Check this Colab out:

<a href="https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Huggingface_pipelines_demo.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a>

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain
ASCCCCCCCC/PENGMENGJIE
[ "license:apache-2.0" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-29T22:07:03Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: mobilebert_sa_GLUE_Experiment_logit_kd_mrpc_256 results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.6911764705882353 - name: F1 type: f1 value: 0.7967741935483871 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_logit_kd_mrpc_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.4961 - Accuracy: 0.6912 - F1: 0.7968 - Combined Score: 0.7440 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6315 | 1.0 | 29 | 0.5588 | 0.6838 | 0.8122 | 0.7480 | | 0.6098 | 2.0 | 58 | 0.5552 | 0.6838 | 0.8122 | 0.7480 | | 0.6099 | 3.0 | 87 | 0.5544 | 0.6838 | 0.8122 | 0.7480 | | 0.6084 | 4.0 | 116 | 0.5541 | 0.6838 | 0.8122 | 0.7480 | | 0.603 | 5.0 | 145 | 0.5497 | 0.6838 | 0.8122 | 0.7480 | | 0.5758 | 6.0 | 174 | 0.5335 | 0.7059 | 0.8171 | 0.7615 | | 0.4984 | 7.0 | 203 | 0.4961 | 0.6912 | 0.7968 | 0.7440 | | 0.4329 | 8.0 | 232 | 0.5478 | 0.6814 | 0.7743 | 0.7278 | | 0.3876 | 9.0 | 261 | 0.5450 | 0.6838 | 0.7861 | 0.7349 | | 0.3286 | 10.0 | 290 | 0.5792 | 0.6814 | 0.7628 | 0.7221 | | 0.2833 | 11.0 | 319 | 0.5819 | 0.6446 | 0.7249 | 0.6847 | | 0.2611 | 12.0 | 348 | 0.6755 | 0.6936 | 0.7913 | 0.7425 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
AdapterHub/bert-base-uncased-pf-quail
[ "bert", "en", "dataset:quail", "arxiv:2104.08247", "adapter-transformers" ]
null
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2023-01-30T02:04:45Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the course notebook (not a library import)
model = load_from_hub(repo_id="haanjack/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
AdapterHub/bert-base-uncased-pf-squad
[ "bert", "en", "dataset:squad", "arxiv:2104.08247", "adapter-transformers", "question-answering", "adapterhub:qa/squad1" ]
question-answering
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the course notebook (not a library import)
model = load_from_hub(repo_id="haanjack/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
AdapterHub/roberta-base-pf-mit_movie_trivia
[ "roberta", "en", "arxiv:2104.08247", "adapter-transformers", "token-classification", "adapterhub:ner/mit_movie_trivia" ]
token-classification
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 498.70 +/- 3.90
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Aftabhussain/Tomato_Leaf_Classifier
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers", "huggingpics", "model-index", "autotrain_compatible" ]
image-classification
{ "architectures": [ "ViTForImageClassification" ], "model_type": "vit", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
50
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: mobilebert_sa_GLUE_Experiment_logit_kd_stsb_128 results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue config: stsb split: validation args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.05629672306471203 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_logit_kd_stsb_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 1.1533 - Pearson: 0.0554 - Spearmanr: 0.0563 - Combined Score: 0.0558 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 2.5973 | 1.0 | 45 | 1.2342 | -0.0353 | -0.0325 | -0.0339 | | 1.0952 | 2.0 | 90 | 1.1740 | 0.0434 | 0.0419 | 0.0426 | | 1.0581 | 3.0 | 135 | 1.1533 | 0.0554 | 0.0563 | 0.0558 | | 1.0455 | 4.0 | 180 | 1.2131 | 0.0656 | 0.0690 | 0.0673 | | 0.9795 | 5.0 | 225 | 1.3883 | 0.0868 | 0.0858 | 0.0863 | | 0.9197 | 6.0 | 270 | 1.4141 | 0.1181 | 0.1148 | 0.1165 | | 0.8182 | 7.0 | 315 | 1.3460 | 0.1771 | 0.1853 | 0.1812 | | 0.6796 | 8.0 | 360 | 1.1577 | 0.2286 | 0.2340 | 0.2313 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
Aleksandar1932/distilgpt2-rock
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the course notebook (not a library import)
model = load_from_hub(repo_id="SinusCosinus/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
AlexMaclean/sentence-compression
[ "pytorch", "distilbert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: my-awesome-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # my-awesome-model This model is a fine-tuned version of [google/tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.25.1 - TensorFlow 2.9.2 - Tokenizers 0.13.2
Alireza1044/albert-base-v2-wnli
[ "pytorch", "albert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
164
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - allenai/nllb --- # Ramos-Ramos/xlm-roberta-base-en-tl-4-1000 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Ramos-Ramos/xlm-roberta-base-en-tl-4-1000') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('Ramos-Ramos/xlm-roberta-base-en-tl-4-1000') model = AutoModel.from_pretrained('Ramos-Ramos/xlm-roberta-base-en-tl-4-1000') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Ramos-Ramos/xlm-roberta-base-en-tl-4-1000) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 12406 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MSELoss.MSELoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
AndrewMcDowell/wav2vec2-xls-r-300m-japanese
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ja", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: - en - ru - multilingual license: cc-by-sa-4.0 tags: - translation - wmt20 widget: - text: "Сахалинская кайнозойская складчатая область разделяется на Восточную и Западную зоны, разделённые Центрально-Сахалинским грабеном." - text: "Существует несколько мнений о его точном месторасположении." - text: "Крупный научно-образовательный центр, в котором обучается свыше ста тысяч студентов." --- # Fairseq Ru-En NMT WMT20 MLQE This repository contains the Russian-English model trained with the [fairseq toolkit](https://github.com/pytorch/fairseq) that was used to produce translations used in the WMT21 shared task on quality estimation (QE) on the [MLQE dataset](https://github.com/facebookresearch/mlqe). The checkpoint was converted from the original fairseq checkpoint available [here](https://github.com/facebookresearch/mlqe/tree/master/nmt_models) using the `convert_fsmt_original_pytorch_checkpoint_to_pytorch.py` script from the 🤗 Transformers library (v4.26.0). Please refer to the repositories linked above for additional information on usage, parameters and training data.
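For a quick start, a minimal translation sketch with the 🤗 Transformers FSMT classes is shown below (the model name is a placeholder for this repository's Hub id; the beam size is illustrative):

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

# Placeholder: substitute this repository's id on the Hub.
mname = "<this-repo-id>"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

src = "Существует несколько мнений о его точном месторасположении."
input_ids = tokenizer.encode(src, return_tensors="pt")
outputs = model.generate(input_ids, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```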
AndrewNLP/redditDepressionPropensityClassifiers
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the course notebook (not a library import)
model = load_from_hub(repo_id="arenbeglaryan/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
AnonymousSub/cline-emanuals-techqa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
# Joint Pruning, Quantization and Distillation for BERT-large/SQuADv1.1 ## Setup ```bash git clone https://github.com/vuiseng9/optimum-intel cd optimum-intel pip install -e .[openvino,nncf] cd examples/openvino/question-answering/ pip install -r requirements.txt pip install wandb # optional ``` ## Run ```bash NNCFCFG=/path/to/openvino_config.json MASTER_PORT=<PORTID> RUNID=<RUN_IDENTIFIER> OUTDIR=/path/to/saved_model NEPOCH=30 python -m torch.distributed.launch \ --nproc_per_node 4 \ --master_port $MASTER_PORT \ run_qa.py \ --model_name_or_path bert-large-uncased-whole-word-masking \ --dataset_name squad \ --teacher_model_or_path bert-large-uncased-whole-word-masking-finetuned-squad \ --distillation_weight 0.9 \ --do_eval \ --fp16 \ --do_train \ --learning_rate 3e-5 \ --num_train_epochs $NEPOCH \ --per_device_eval_batch_size 128 \ --per_device_train_batch_size 16 \ --max_seq_length 384 \ --doc_stride 128 \ --logging_steps 1 \ --evaluation_strategy steps \ --eval_steps 250 \ --save_steps 500 \ --overwrite_output_dir \ --run_name $RUNID \ --output_dir $OUTDIR \ --nncf_compression_config $NNCFCFG ``` ### Reference Results ``` Global Step: 39500 F1: 92.482 EM: 86.594 Structured Sparsity (linear): 61.70% Model Sparsity: 55.82% ```
AnonymousSubmission/pretrained-model-1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('AliBuildsAI/sd-class-butterflies-32') image = pipeline().images[0] image ```
AnthonyNelson/DialoGPT-small-ricksanchez
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2023-01-30T22:36:24Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 614.00 +/- 99.64 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga selvino -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga selvino -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga selvino ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Anthos23/FS-distilroberta-fine-tuned
[ "pytorch", "roberta", "text-classification", "transformers", "has_space" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- language: - en tags: - Token Classification widget: - text: >- The FDA approved deucravacitinib for moderate-to-severe plaque psoriasis in adult patients. example_title: example 1 metrics: - accuracy --- This is a model for detecting treatment and disease mentions in texts from the health domain. The training data consists of publicly available [PubMed abstracts](https://www.ncbi.nlm.nih.gov/guide/howto/dwn-records/) and [tweets](https://developer.twitter.com/en/use-cases/do-research) with disease mentions. It was semi-automatically labeled with a set of ad hoc regex rules designed to find treatment-disease links. The label `t` marks treatments (medications, procedures, etc.) and `d` marks disease mentions. The current F1 score (seqeval) is 0.91.
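A minimal usage sketch with the 🤗 Transformers token-classification pipeline. The repository id below is a hypothetical placeholder (the card does not name one), and the aggregation strategy is illustrative rather than prescribed by the card.

```python
# Hedged sketch: the repo id is a placeholder, not a real checkpoint name.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="<your-username>/treatment-disease-ner",  # hypothetical placeholder
    aggregation_strategy="simple",  # merge word pieces into whole-word spans
)

text = ("The FDA approved deucravacitinib for moderate-to-severe "
        "plaque psoriasis in adult patients.")
for entity in ner(text):
    # Per the card, labels are 't' (treatment) and 'd' (disease)
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```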
Anthos23/sentiment-roberta-large-english-finetuned-sentiment-analysis
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: astein0/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Anubhav23/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 374.00 +/- 214.89 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bjarlestam -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bjarlestam -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bjarlestam ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('buffer_size', 70000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.01), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Apisate/DialoGPT-small-jordan
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: reinforce-pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 116.10 +/- 69.09 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
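The card does not show how to load the checkpoint, so here is a heavily hedged sketch; the repo id and the `model.pt` filename are assumptions, not confirmed by the card, and unpickling the policy requires the course notebook's Policy class to be importable.

```python
# Hedged sketch only: the repo id and filename below are assumptions.
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="<user>/reinforce-pixelcopter",  # placeholder owner/name
    filename="model.pt",                     # assumed checkpoint name
)
# torch.load unpickles the whole policy object, so the notebook's Policy
# class definition must be available in the current environment.
policy = torch.load(path)
policy.eval()
```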
Apisate/Discord-Ai-Bot
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - spacy - text-classification language: - en model-index: - name: en_textcat_sales results: [] --- | Feature | Description | | --- | --- | | **Name** | `en_textcat_sales` | | **Version** | `0.0.0` | | **spaCy** | `>=3.4.3,<3.5.0` | | **Default Pipeline** | `textcat` | | **Components** | `textcat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (2 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`textcat`** | `OTHER`, `2100 - Sales` | </details> ### Accuracy | Type | Score | | --- | --- | | `CATS_SCORE` | 83.00 | | `CATS_MICRO_P` | 95.13 | | `CATS_MICRO_R` | 95.13 | | `CATS_MICRO_F` | 95.13 | | `CATS_MACRO_P` | 94.91 | | `CATS_MACRO_R` | 76.76 | | `CATS_MACRO_F` | 83.00 | | `CATS_MACRO_AUC` | 91.29 | | `CATS_MACRO_AUC_PER_TYPE` | 0.00 | | `TEXTCAT_LOSS` | 473.84 |
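A hedged usage sketch, assuming the packaged pipeline has been installed (for example from a built wheel; the exact install path depends on how the package was distributed):

```python
# Minimal sketch: load the installed `en_textcat_sales` package and read
# the per-label scores produced by the textcat component.
import spacy

nlp = spacy.load("en_textcat_sales")
doc = nlp("Quarterly revenue from the new sales channel exceeded forecasts.")

# doc.cats maps each label to a score; per the card the labels are
# 'OTHER' and '2100 - Sales'
for label, score in doc.cats.items():
    print(f"{label}: {score:.3f}")
```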
Aplinxy9plin/toxic-detection-rus
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage (`load_from_hub` is the helper defined in the Deep RL Course notebook.) ```python import gym model = load_from_hub(repo_id="PeterDerLustige/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Apoorva/k2t-test
[ "pytorch", "t5", "text2text-generation", "en", "transformers", "keytotext", "k2t", "Keywords to Sentences", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
7
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 540 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 540, "warmup_steps": 54, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
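One natural follow-up, sketched under the same '{MODEL_NAME}' placeholder the card itself uses: scoring the two example sentences against each other with the cosine-similarity helper that sentence-transformers ships.

```python
# Follow-on sketch: compare the card's two example sentences.
# '{MODEL_NAME}' is the card's own unfilled placeholder for the repo id.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)

# util.cos_sim returns a 1x1 tensor here; values near 1 mean very similar
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {score.item():.3f}")
```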
ArBert/albert-base-v2-finetuned-ner-agglo-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('AliBuildsAI/sd-class-butterflies-64') image = pipeline().images[0] image ```
ArBert/albert-base-v2-finetuned-ner-gmm-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: QTableTaxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage (`load_from_hub` is the helper defined in the Deep RL Course notebook.) ```python import gym model = load_from_hub(repo_id="PeterDerLustige/QTableTaxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ArBert/albert-base-v2-finetuned-ner-kmeans-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2023-01-30T23:44:16Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - samsum metrics: - rouge model-index: - name: flan-t5-xl-samsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: samsum type: samsum config: samsum split: test args: samsum metrics: - name: Rouge1 type: rouge value: 49.0281 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-xl-samsum This model is a fine-tuned version of [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 49.0281 - Rouge2: 25.8273 - Rougel: 41.7919 - Rougelsum: 45.2608 - Gen Len: 16.6874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.0 | 1.0 | 3683 | nan | 49.0281 | 25.8273 | 41.7919 | 45.2608 | 16.6874 | | 0.0 | 2.0 | 7366 | nan | 49.0281 | 25.8273 | 41.7919 | 45.2608 | 16.6874 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.12.0+cu116 - Datasets 2.9.0 - Tokenizers 0.12.1
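A hedged inference sketch with the 🤗 summarization pipeline; the repo id is a hypothetical placeholder, since the card does not state where the fine-tuned weights are hosted, and flan-t5-xl itself is a roughly 3B-parameter model, so expect a sizeable download.

```python
# Hedged sketch: the repo id below is a placeholder, not a confirmed repo.
from transformers import pipeline

summarizer = pipeline("summarization", model="<your-username>/flan-t5-xl-samsum")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
# The reported Gen Len on samsum is ~16.7 tokens, so short outputs are expected.
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```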
ArBert/albert-base-v2-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-01-30T23:44:21Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: sohm/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
ArBert/albert-base-v2-finetuned-ner
[ "pytorch", "tensorboard", "albert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage (`load_from_hub` is the helper defined in the Deep RL Course notebook.) ```python import gym model = load_from_hub(repo_id="astein0/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ArBert/bert-base-uncased-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage (`load_from_hub` is the helper defined in the Deep RL Course notebook.) ```python import gym model = load_from_hub(repo_id="astein0/q-Taxi-v1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ArBert/roberta-base-finetuned-ner-agglo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - spacy - token-classification language: - grc model-index: - name: grc_dep_treebanks_sm results: - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.7489347434 - task: name: POS type: token-classification metrics: - name: POS (UPOS) Accuracy type: accuracy value: 0.9084177436 - task: name: MORPH type: token-classification metrics: - name: Morph (UFeats) Accuracy type: accuracy value: 0.8172292389 - task: name: LEMMA type: token-classification metrics: - name: Lemma Accuracy type: accuracy value: 0.9341452854 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.700334588 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.6250336072 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.6895810956 --- | Feature | Description | | --- | --- | | **Name** | `grc_dep_treebanks_sm` | | **Version** | `0.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `trainable_lemmatizer`, `frequency_lemmatizer` | | **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `trainable_lemmatizer`, `frequency_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [marton & jan]() | ### Label Scheme <details> <summary>View label scheme (2299 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `---------`, `--p---fa-`, `--s---ma-`, `-3paia---`, `-3paim---`, `-3siia---`, `A-`, `C-`, `Df`, `Dq`, `Du`, `F-`, `G-`, `I-`, `Ma`, `Mo`, `Nb`, `Ne`, `Pc`, `Pd`, `Pi`, `Pk`, `Pp`, `Pr`, `Ps`, `Px`, `R-`, `S-`, `V-`, `a--------`, `a-------s`, `a-d---fa-`, `a-d---fd-`, `a-d---fg-`, `a-d---fn-`, `a-d---ma-`, `a-d---md-`, `a-d---mg-`, `a-d---mn-`, `a-d---mnc`, `a-d---mv-`, `a-d---na-`, `a-d---ng-`, `a-d---nn-`, `a-p----dc`, `a-p---fa-`, `a-p---fac`, `a-p---fas`, `a-p---fd-`, `a-p---fdc`, `a-p---fds`, `a-p---fg-`, `a-p---fgc`, `a-p---fn-`, `a-p---fnc`, `a-p---fns`, `a-p---fv-`, `a-p---m--`, `a-p---m-c`, `a-p---ma-`, `a-p---mac`, `a-p---mas`, `a-p---md-`, `a-p---mdc`, `a-p---mds`, `a-p---mg-`, `a-p---mgc`, `a-p---mgs`, `a-p---mn-`, `a-p---mnc`, `a-p---mns`, `a-p---mv-`, `a-p---mvs`, `a-p---na-`, `a-p---nac`, `a-p---nas`, `a-p---nd-`, `a-p---ndc`, `a-p---nds`, `a-p---ng-`, `a-p---ngs`, `a-p---nn-`, `a-p---nnc`, `a-p---nns`, `a-p---nv-`, `a-s----d-`, `a-s----dc`, `a-s----g-`, `a-s----gc`, `a-s---fa-`, `a-s---fac`, `a-s---fas`, `a-s---fd-`, `a-s---fds`, `a-s---fg-`, `a-s---fgc`, `a-s---fgs`, `a-s---fn-`, `a-s---fnc`, `a-s---fns`, `a-s---fv-`, `a-s---m--`, `a-s---ma-`, `a-s---mac`, `a-s---mas`, `a-s---md-`, `a-s---mdc`, `a-s---mds`, `a-s---mg-`, `a-s---mgc`, `a-s---mgs`, `a-s---mn-`, `a-s---mnc`, `a-s---mns`, `a-s---mv-`, `a-s---mvc`, `a-s---mvs`, `a-s---na-`, `a-s---nac`, `a-s---nas`, `a-s---nd-`, `a-s---ndc`, `a-s---nds`, `a-s---ng-`, `a-s---nn-`, `a-s---nnc`, `a-s---nns`, `a-s---nv-`, `a-s---nvs`, `c--------`, `d--------`, `d-------c`, `d-------s`, `g--------`, `i--------`, `l--------`, `l-d---fa-`, `l-d---fg-`, `l-d---mg-`, `l-d---mn-`, `l-d---na-`, `l-d---nn-`, `l-p---fa-`, `l-p---fd-`, `l-p---fg-`, `l-p---fn-`, `l-p---ma-`, `l-p---md-`, `l-p---mg-`, `l-p---mn-`, `l-p---na-`, `l-p---nd-`, `l-p---ng-`, `l-p---nn-`, `l-s---fa-`, `l-s---fd-`, `l-s---fg-`, `l-s---fn-`, 
`l-s---ma-`, `l-s---md-`, `l-s---mg-`, `l-s---mn-`, `l-s---na-`, `l-s---nd-`, `l-s---ng-`, `l-s---nn-`, `m--------`, `m-p---m--`, `m-p---md-`, `m-p---nn-`, `n-----fg-`, `n-----na-`, `n-----nn-`, `n-d----a-`, `n-d---fa-`, `n-d---fd-`, `n-d---fg-`, `n-d---fn-`, `n-d---ma-`, `n-d---md-`, `n-d---mg-`, `n-d---mn-`, `n-d---mv-`, `n-d---na-`, `n-d---nn-`, `n-p----d-`, `n-p----g-`, `n-p---fa-`, `n-p---fd-`, `n-p---fg-`, `n-p---fn-`, `n-p---fv-`, `n-p---ma-`, `n-p---md-`, `n-p---mg-`, `n-p---mn-`, `n-p---mv-`, `n-p---na-`, `n-p---nd-`, `n-p---ng-`, `n-p---nn-`, `n-p---nv-`, `n-s----d-`, `n-s----g-`, `n-s----n-`, `n-s----v-`, `n-s---fa-`, `n-s---fd-`, `n-s---fg-`, `n-s---fn-`, `n-s---fv-`, `n-s---m--`, `n-s---ma-`, `n-s---md-`, `n-s---mg-`, `n-s---mn-`, `n-s---mv-`, `n-s---na-`, `n-s---nd-`, `n-s---ng-`, `n-s---nn-`, `n-s---nv-`, `p--------`, `p-d----d-`, `p-d----n-`, `p-d---fa-`, `p-d---fd-`, `p-d---fg-`, `p-d---fn-`, `p-d---ma-`, `p-d---md-`, `p-d---mg-`, `p-d---mn-`, `p-d---mv-`, `p-p----a-`, `p-p----d-`, `p-p----g-`, `p-p----n-`, `p-p---fa-`, `p-p---fd-`, `p-p---fg-`, `p-p---fn-`, `p-p---ma-`, `p-p---md-`, `p-p---mg-`, `p-p---mn-`, `p-p---na-`, `p-p---nd-`, `p-p---ng-`, `p-p---nn-`, `p-s----a-`, `p-s----d-`, `p-s----g-`, `p-s----n-`, `p-s---fa-`, `p-s---fd-`, `p-s---fg-`, `p-s---fn-`, `p-s---ma-`, `p-s---md-`, `p-s---mg-`, `p-s---mn-`, `p-s---mv-`, `p-s---na-`, `p-s---nd-`, `p-s---ng-`, `p-s---nn-`, `p1p---fa-`, `p1p---ma-`, `p1p---md-`, `p1p---mg-`, `p1p---mn-`, `p1s---fa-`, `p1s---fd-`, `p1s---fg-`, `p1s---fn-`, `p1s---ma-`, `p1s---md-`, `p1s---mg-`, `p1s---mn-`, `p2p----a-`, `p2p----d-`, `p2p---ma-`, `p2p---mg-`, `p2p---mn-`, `p2s----a-`, `p2s----d-`, `p2s----g-`, `p2s----n-`, `p2s---ma-`, `p2s---md-`, `p2s---mg-`, `p3s---fa-`, `p3s---ma-`, `r--------`, `u--------`, `v---na---`, `v--amm---`, `v--an----`, `v--ana---`, `v--ane---`, `v--anm---`, `v--anp---`, `v--fna---`, `v--fne---`, `v--fnm---`, `v--fnp---`, `v--pna---`, `v--pnd---`, `v--pne---`, `v--pnp---`, `v--ppefa-`, `v--ppemn-`, `v--rn----`, `v--rna---`, `v--rne---`, `v--rnp---`, `v--tna---`, `v-dapafn-`, `v-dapama-`, `v-dapamg-`, `v-dapamn-`, `v-dapmfn-`, `v-dapmmn-`, `v-dappma-`, `v-dappmn-`, `v-dppafg-`, `v-dppama-`, `v-dppamn-`, `v-dppefn-`, `v-dppema-`, `v-dppemd-`, `v-dppemn-`, `v-dpppmn-`, `v-drpama-`, `v-drpamn-`, `v-drpefn-`, `v-drpemn-`, `v-p-pmma-`, `v-pap-mn-`, `v-papafa-`, `v-papafg-`, `v-papafn-`, `v-papama-`, `v-papamd-`, `v-papamg-`, `v-papamn-`, `v-papana-`, `v-papand-`, `v-papann-`, `v-papefn-`, `v-papema-`, `v-papemn-`, `v-papmfa-`, `v-papmfg-`, `v-papmfn-`, `v-papmma-`, `v-papmmd-`, `v-papmmg-`, `v-papmmn-`, `v-papmna-`, `v-papmng-`, `v-papmnn-`, `v-pappfd-`, `v-pappfg-`, `v-pappfn-`, `v-pappma-`, `v-pappmd-`, `v-pappmg-`, `v-pappmn-`, `v-pappna-`, `v-pappng-`, `v-pappnn-`, `v-pfpama-`, `v-pfpamg-`, `v-pfpamn-`, `v-pfpema-`, `v-pfpemn-`, `v-pfpmfa-`, `v-pfpmfn-`, `v-pfpmma-`, `v-pfpmmd-`, `v-pfpmmg-`, `v-pfpmmn-`, `v-pfpmnn-`, `v-pfppmn-`, `v-ppp-mn-`, `v-pppafa-`, `v-pppafd-`, `v-pppafg-`, `v-pppafn-`, `v-pppafv-`, `v-pppama-`, `v-pppamd-`, `v-pppamg-`, `v-pppamn-`, `v-pppamv-`, `v-pppana-`, `v-pppand-`, `v-pppang-`, `v-pppann-`, `v-pppefa-`, `v-pppefd-`, `v-pppefg-`, `v-pppefn-`, `v-pppefv-`, `v-pppema-`, `v-pppemd-`, `v-pppemg-`, `v-pppemn-`, `v-pppemv-`, `v-pppena-`, `v-pppend-`, `v-pppeng-`, `v-pppenn-`, `v-ppppma-`, `v-ppppmd-`, `v-ppppmn-`, `v-prp-mn-`, `v-prpafa-`, `v-prpafd-`, `v-prpafn-`, `v-prpama-`, `v-prpamd-`, `v-prpamg-`, `v-prpamn-`, `v-prpana-`, `v-prpang-`, `v-prpefa-`, `v-prpefd-`, `v-prpefg-`, 
`v-prpefn-`, `v-prpema-`, `v-prpemd-`, `v-prpemg-`, `v-prpemn-`, `v-prpena-`, `v-prpend-`, `v-prpeng-`, `v-prpenn-`, `v-prppfn-`, `v-prppmn-`, `v-sagamn-`, `v-saiamn-`, `v-samp---`, `v-sap-mg-`, `v-sap-mn-`, `v-sapafa-`, `v-sapafd-`, `v-sapafg-`, `v-sapafn-`, `v-sapama-`, `v-sapamd-`, `v-sapamg-`, `v-sapamn-`, `v-sapamv-`, `v-sapana-`, `v-sapang-`, `v-sapann-`, `v-sapanv-`, `v-sapema-`, `v-sapemn-`, `v-sapmfa-`, `v-sapmfd-`, `v-sapmfg-`, `v-sapmfn-`, `v-sapmma-`, `v-sapmmd-`, `v-sapmmg-`, `v-sapmmn-`, `v-sapmna-`, `v-sapmng-`, `v-sapmnn-`, `v-sappfa-`, `v-sappfd-`, `v-sappfg-`, `v-sappfn-`, `v-sappma-`, `v-sappmd-`, `v-sappmg-`, `v-sappmn-`, `v-sappna-`, `v-sappng-`, `v-sappnn-`, `v-sappnv-`, `v-sfpafa-`, `v-sfpafd-`, `v-sfpafn-`, `v-sfpama-`, `v-sfpamd-`, `v-sfpamg-`, `v-sfpamn-`, `v-sfpmfa-`, `v-sfpmfd-`, `v-sfpmfg-`, `v-sfpmfn-`, `v-sfpmma-`, `v-sfpmmg-`, `v-sfpmmn-`, `v-sfpmna-`, `v-sfppma-`, `v-spiamn-`, `v-spp-mn-`, `v-spp-nn-`, `v-sppa---`, `v-sppafa-`, `v-sppafd-`, `v-sppafg-`, `v-sppafn-`, `v-sppafv-`, `v-sppama-`, `v-sppamd-`, `v-sppamg-`, `v-sppamn-`, `v-sppamv-`, `v-sppana-`, `v-sppand-`, `v-sppang-`, `v-sppann-`, `v-sppanv-`, `v-sppefa-`, `v-sppefd-`, `v-sppefg-`, `v-sppefn-`, `v-sppema-`, `v-sppemd-`, `v-sppemg-`, `v-sppemn-`, `v-sppemv-`, `v-sppena-`, `v-sppend-`, `v-sppeng-`, `v-sppenn-`, `v-spppfa-`, `v-spppfd-`, `v-spppfg-`, `v-spppfn-`, `v-spppma-`, `v-spppmn-`, `v-srp-mn-`, `v-srpafa-`, `v-srpafd-`, `v-srpafg-`, `v-srpafn-`, `v-srpama-`, `v-srpamd-`, `v-srpamg-`, `v-srpamn-`, `v-srpamv-`, `v-srpana-`, `v-srpand-`, `v-srpang-`, `v-srpann-`, `v-srpefa-`, `v-srpefd-`, `v-srpefg-`, `v-srpefn-`, `v-srpema-`, `v-srpemd-`, `v-srpemg-`, `v-srpemn-`, `v-srpemv-`, `v-srpena-`, `v-srpend-`, `v-srpeng-`, `v-srpenn-`, `v-srppfn-`, `v-srppma-`, `v-srppmn-`, `v-srppmv-`, `v1paia---`, `v1paim---`, `v1paip---`, `v1paoa---`, `v1paom---`, `v1paop---`, `v1pasa---`, `v1pase---`, `v1pasm---`, `v1pasp---`, `v1pfia---`, `v1pfim---`, `v1pfom---`, `v1piia---`, `v1piie---`, `v1plia---`, `v1plie---`, `v1ppia---`, `v1ppie---`, `v1ppip---`, `v1ppoa---`, `v1ppoe---`, `v1ppsa---`, `v1ppse---`, `v1pria---`, `v1prie---`, `v1prsa---`, `v1prse---`, `v1ptie---`, `v1s-sa---`, `v1sa-a---`, `v1saia---`, `v1saie---`, `v1saim---`, `v1saip---`, `v1sao----`, `v1saoa---`, `v1saoe---`, `v1saom---`, `v1saop---`, `v1sasa---`, `v1sase---`, `v1sasm---`, `v1sasp---`, `v1sfi----`, `v1sfia---`, `v1sfie---`, `v1sfim---`, `v1sfip---`, `v1siia---`, `v1siie---`, `v1slia---`, `v1slie---`, `v1slim---`, `v1spia---`, `v1spie---`, `v1spoa---`, `v1spoe---`, `v1spsa---`, `v1spse---`, `v1sria---`, `v1srie---`, `v1sroa---`, `v1sroe---`, `v1srsa---`, `v1stie---`, `v1stim---`, `v2daia---`, `v2dama---`, `v2dasa---`, `v2dase---`, `v2dfia---`, `v2dfim---`, `v2diia---`, `v2diie---`, `v2dpia---`, `v2dpma---`, `v2dpme---`, `v2dria---`, `v2drma---`, `v2paia---`, `v2paim---`, `v2paip---`, `v2pama---`, `v2pame---`, `v2pamm---`, `v2paoa---`, `v2paom---`, `v2paop---`, `v2pasa---`, `v2pase---`, `v2pasm---`, `v2pasp---`, `v2pfia---`, `v2pfim---`, `v2piia---`, `v2piie---`, `v2ppia---`, `v2ppie---`, `v2ppma---`, `v2ppme---`, `v2ppoa---`, `v2ppoe---`, `v2ppsa---`, `v2pria---`, `v2prie---`, `v2prma---`, `v2prmp---`, `v2proa---`, `v2prsa---`, `v2saia---`, `v2saie---`, `v2saim---`, `v2saip---`, `v2sam----`, `v2sama---`, `v2same---`, `v2samm---`, `v2samp---`, `v2saoa---`, `v2saoe---`, `v2saom---`, `v2saop---`, `v2sasa---`, `v2sase---`, `v2sasm---`, `v2sasp---`, `v2sfi----`, `v2sfia---`, `v2sfie---`, `v2sfim---`, `v2sfip---`, `v2siia---`, `v2siie---`, 
`v2siip---`, `v2slia---`, `v2slie---`, `v2slim---`, `v2spia---`, `v2spie---`, `v2spma---`, `v2spme---`, `v2spoa---`, `v2spoe---`, `v2spsa---`, `v2spse---`, `v2sria---`, `v2srie---`, `v2srma---`, `v2srme---`, `v2sroa---`, `v2srsa---`, `v2stie---`, `v3-roe---`, `v3daia---`, `v3daim---`, `v3daip---`, `v3daoa---`, `v3dfia---`, `v3dfim---`, `v3diia---`, `v3diie---`, `v3dlia---`, `v3dlie---`, `v3dlim---`, `v3dpia---`, `v3dpie---`, `v3dpma---`, `v3dpme---`, `v3dpsa---`, `v3dria---`, `v3pai----`, `v3paia---`, `v3paie---`, `v3paim---`, `v3paip---`, `v3pamm---`, `v3paoa---`, `v3paoe---`, `v3paom---`, `v3paop---`, `v3pasa---`, `v3pase---`, `v3pasm---`, `v3pasp---`, `v3pfia---`, `v3pfie---`, `v3pfim---`, `v3piia---`, `v3piie---`, `v3piip---`, `v3plia---`, `v3plie---`, `v3plim---`, `v3plip---`, `v3ppia---`, `v3ppie---`, `v3ppip---`, `v3ppma---`, `v3ppme---`, `v3ppoa---`, `v3ppoe---`, `v3ppsa---`, `v3ppse---`, `v3pria---`, `v3prie---`, `v3prip---`, `v3sai----`, `v3saia---`, `v3saie---`, `v3saim---`, `v3saip---`, `v3sama---`, `v3samm---`, `v3samp---`, `v3sana---`, `v3sao----`, `v3saoa---`, `v3saoe---`, `v3saom---`, `v3saop---`, `v3sas----`, `v3sasa---`, `v3sase---`, `v3sasm---`, `v3sasp---`, `v3sfi----`, `v3sfia---`, `v3sfie---`, `v3sfim---`, `v3sfip---`, `v3sfoa---`, `v3sii----`, `v3siia---`, `v3siie---`, `v3siip---`, `v3sli----`, `v3slia---`, `v3slie---`, `v3slim---`, `v3slip---`, `v3spia---`, `v3spie---`, `v3spip---`, `v3spma---`, `v3spme---`, `v3spoa---`, `v3spoe---`, `v3spop---`, `v3spsa---`, `v3spse---`, `v3sria---`, `v3srie---`, `v3srip---`, `v3srma---`, `v3sroa---`, `v3srsa---`, `v3stie---`, `v3stim---`, `v3stip---`, `x--------`, `x-p----d-`, `x-p---nn-` | | **`morphologizer`** | `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET`, `POS=SCONJ`, `POS=CCONJ`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `POS=ADP`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Definite=Def\|Gender=Masc,Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=ADV`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Inf\|Voice=Mid`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, 
`Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid,Pass`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Mid,Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Pos\|Gender=Masc,Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET`, `POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Masc,Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=AUX\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=ADJ\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, 
`Degree=Sup\|POS=ADV`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Inf\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `POS=ADV\|Polarity=Neg`, `Case=Acc\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Mood=Opt\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Mid,Pass`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=VERB\|Tense=Fut\|VerbForm=Inf\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid,Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Definite=Def\|Gender=Masc,Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid,Pass`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid,Pass`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Degree=Pos\|POS=ADV`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=ADJ\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc,Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET`, 
`Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=NUM`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc,Neut\|Number=Sing\|POS=DET`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|Tense=Pres\|VerbForm=Inf\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem,Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, 
`Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=DET`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Voc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `POS=INTJ`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `POS=ADV\|PronType=Rel`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|Tense=Fut\|VerbForm=Inf\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, 
`Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Pos\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Masc,Neut\|Number=Plur\|POS=DET`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADJ\|PronType=Dem`, `Degree=Cmp\|POS=ADV`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, 
`Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=DET`, `POS=ADV\|PronType=Int`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Masc,Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Opt\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=DET`, `Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc,Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `POS=AUX\|Tense=Fut\|VerbForm=Inf\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, 
`Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Gen\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Degree=Pos\|Gender=Fem,Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Gender=Masc,Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, 
`Case=Nom\|Degree=Pos\|Gender=Fem,Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET`, `Aspect=Perf\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Gen\|Gender=Masc,Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=DET`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem,Masc\|Number=Sing\|POS=DET`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Int`, `Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Fem,Masc\|Number=Plur\|POS=NUM`, `Aspect=Perf\|Case=Dat\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Pass`, 
`Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Degree=Cmp\|Gender=Fem,Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Case=Acc\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Fem,Masc\|Number=Plur\|POS=DET`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ\|Person=1\|Poss=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Masc,Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Gen\|Definite=Def\|Gender=Masc,Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Gender=Fem,Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Fem,Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, 
`Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=3\|Poss=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Person=3\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=3\|Poss=Yes`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NUM`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Mid`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Degree=Cmp\|Gender=Fem,Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Sup\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ\|Person=1\|Poss=Yes`, `Aspect=Perf\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NUM`, 
`Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Masc,Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Degree=Pos\|Gender=Fem,Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Number=Plur\|POS=ADJ\|PronType=Dem`, `Case=Acc\|Degree=Sup\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=ADJ\|PronType=Dem`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid,Pass`, `Case=Dat\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=ADJ\|PronType=Dem`, `Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NUM`, 
`Case=Acc\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Voc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|VerbForm=Gdv`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=ADJ\|PronType=Dem`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=ADJ\|PronType=Dem`, `Case=Gen\|Gender=Masc,Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Fem,Masc\|Number=Plur\|POS=NUM`, `Case=Dat\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ\|Person=1\|Poss=Yes`, 
`Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `POS=PROPN`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid,Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ\|Person=2\|Poss=Yes`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem,Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=3\|Poss=Yes`, `Aspect=Perf\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Dat\|Definite=Def\|Gender=Masc,Neut\|Number=Plur\|POS=ADJ\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Fem,Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, 
`Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Voc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ\|Person=3\|Poss=Yes`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid,Pass`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Fem,Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=ADJ\|PronType=Dem`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=3\|Poss=Yes`, `Case=Acc\|Gender=Masc,Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid,Pass`, `Case=Gen\|Degree=Pos\|Gender=Fem,Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=ADJ\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Person=3\|Poss=Yes`, `Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Degree=Cmp\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `POS=VERB\|Tense=Fut\|VerbForm=Inf\|Voice=Pass`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Aspect=Imp\|Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid,Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=3\|Poss=Yes`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=VERB`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ\|Person=3\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ\|Person=3\|Poss=Yes`, `Case=Dat\|Gender=Fem,Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Mood=Opt\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADJ\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|VerbForm=Gdv`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Dat\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ\|Person=2\|Poss=Yes`, `Mood=Opt\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|VerbForm=Gdv`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=ADJ\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Degree=Pos\|Gender=Fem,Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Aspect=Perf\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Dat\|Degree=Cmp\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem,Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem,Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Person=2\|Poss=Yes`, `Aspect=Perf\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ\|Person=3\|Poss=Yes`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Voc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Gender=Fem,Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Opt\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Gdv`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Degree=Cmp\|Gender=Fem,Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADJ\|PronType=Dem`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc,Neut\|Number=Plur\|POS=ADJ`, `Mood=Opt\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Fem,Masc\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Fem,Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, 
`Case=Voc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem,Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Fem,Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Gender=Fem,Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Fem,Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Fem,Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Case=Acc\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=NUM`, `POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Voc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, 
`Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `POS=X`, `Case=Nom\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Gender=Fem,Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem,Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem,Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=DET`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Gen\|Gender=Fem,Masc\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Fem,Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid,Pass`, `Aspect=Perf\|Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=ADJ\|Person=1\|Poss=Yes`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Gender=Masc,Neut\|Number=Plur\|POS=ADJ`, `Case=Voc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=NUM`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Fem,Masc\|Number=Plur\|POS=DET`, `Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Sup\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ`, 
`Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Aspect=Perf\|POS=AUX\|Tense=Past\|VerbForm=Inf\|Voice=Mid`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Aspect=Perf\|Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Case=Dat\|Gender=Masc,Neut\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=ADJ\|PronType=Dem`, `Case=Gen\|Gender=Fem,Masc\|Number=Sing\|POS=DET`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=ADJ\|PronType=Dem`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Voc\|Degree=Pos\|Gender=Fem,Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=2\|Poss=Yes`, `Aspect=Perf\|Case=Dat\|Gender=Masc,Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Case=Dat\|Number=Sing\|POS=DET`, `Case=Dat\|Gender=Fem,Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem,Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem,Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem,Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Fem,Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=ADJ\|Person=1\|Poss=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid,Pass`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid,Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Mid,Pass`, `Case=Dat\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid,Pass`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Mid,Pass`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Voc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Case=Dat\|Gender=Masc,Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc,Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Aspect=Perf\|POS=AUX\|Tense=Past\|VerbForm=Inf\|Voice=Act`, `POS=VERB\|Tense=Past\|VerbForm=Inf\|Voice=Act`, `POS=PUNCT`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON`, `POS=VERB\|Tense=Past\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `POS=VERB\|Tense=Past\|VerbForm=Inf\|Voice=Mid`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON`, 
`Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `POS=PART`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON`, 
`Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON`, `POS=DET`, `Case=Gen\|Number=Sing\|POS=PRON`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part`, `Case=Nom\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Number=Sing\|POS=PRON`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `POS=PRON`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Voc\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Masc\|Number=Dual\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Dual\|POS=PRON`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=DET`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Dual\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Dual\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Dual\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Dual\|POS=PRON`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Fem\|Number=Dual\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Dual\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Gender=Masc\|Number=Dual\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Dual\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=PRON`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Dual\|POS=ADJ`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=X`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Dual\|POS=DET`, `Case=Acc\|Gender=Fem\|Number=Dual\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Dual\|POS=PRON`, `Case=Acc\|Gender=Fem\|Number=Dual\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Number=Sing\|POS=PRON`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Number=Sing\|POS=NOUN`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `POS=VERB\|Tense=Past\|VerbForm=Inf`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Gender=Fem\|Number=Dual\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON`, 
`Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Dual\|POS=PRON`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=X`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Gender=Neut\|Number=Dual\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Voc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Gender=Fem\|Number=Dual\|POS=PRON`, `Case=Gen\|Gender=Fem\|Number=Dual\|POS=DET`, `Case=Gen\|Gender=Fem\|Number=Dual\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Dual\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Imp\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Imp\|Number=Dual\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Dual\|POS=NOUN`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Dual\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Dual\|POS=PRON`, `Case=Nom\|Gender=Neut\|Number=Dual\|POS=NOUN`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Dual\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Dual\|POS=ADJ`, `Mood=Ind\|Number=Dual\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Mood=Imp\|Number=Dual\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Dual\|POS=PRON`, `Case=Nom\|Number=Dual\|POS=PRON`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|POS=NOUN`, `Case=Acc\|Gender=Neut\|POS=NOUN`, 
`Case=Dat\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Voc\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Dual\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Dual\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Dual\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Dual\|POS=PRON`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Dual\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin\|Voice=Mid`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Mood=Sub\|Number=Dual\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Dual\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Dual\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Inf`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Dual\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=X\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Neut\|Number=Dual\|POS=DET`, `Case=Nom\|Gender=Neut\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|POS=VERB\|Tense=Fut\|VerbForm=Inf\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Case=Voc\|Gender=Masc\|Number=Dual\|POS=ADJ`, `Mood=Sub\|Number=Dual\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Dual\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=X\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Opt\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Mood=Imp\|Number=Dual\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Dual\|POS=PRON`, `Mood=Sub\|Number=Dual\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Mood=Imp\|Number=Dual\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Mood=Imp\|Number=Dual\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Opt\|Number=Dual\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=X\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Voc\|Number=Sing\|POS=NOUN`, 
`Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Dual\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Number=Plur\|POS=PRON`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Imp\|Number=Dual\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Mood=Ind\|Number=Dual\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Mood=Opt\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1`, `Degree=Sup\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2`, `Case=Nom\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=X`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Voice=Act`, 
`Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `Case=Voc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Mid`, `POS=VERB\|Tense=Pres\|VerbForm=Inf`, `Aspect=Perf\|Case=Voc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Dual\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Acc\|Number=Dual\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Dual\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Inf\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=X`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Mid`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Mid`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Gender=Fem\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Mid`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `case`, `cc`, `ccomp`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `dislocated`, `fixed`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` | </details> ### Accuracy | Type | Score | | --- | --- | | `TAG_ACC` | 74.89 | | `POS_ACC` | 90.84 | | `MORPH_ACC` | 81.72 | | `DEP_UAS` | 70.03 | | `DEP_LAS` | 62.50 | | `SENTS_P` | 64.23 | | `SENTS_R` | 74.44 | | `SENTS_F` | 68.96 | | `LEMMA_ACC` | 93.41 | | `TOK2VEC_LOSS` | 6208385.96 | | `TAGGER_LOSS` | 1911090.81 | | `MORPHOLOGIZER_LOSS` | 2730747.30 | | `PARSER_LOSS` | 8620412.55 | | `TRAINABLE_LEMMATIZER_LOSS` | 760189.68 |
ArBert/roberta-base-finetuned-ner-kmeans-twitter
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2023-01-31T00:34:58Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 540 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 540, "warmup_steps": 54, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
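## Semantic Search Example

A short sketch of the semantic-search use case mentioned above, using the `util.cos_sim` helper from sentence-transformers; `{MODEL_NAME}` is the unfilled placeholder from this card.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # placeholder from this card

query_embedding = model.encode("How do I reset my password?")
corpus_embeddings = model.encode([
    "Password reset instructions",
    "Shipping and delivery times",
])

# Cosine similarity between the query and each corpus sentence.
scores = util.cos_sim(query_embedding, corpus_embeddings)
print(scores)
```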
ArBert/roberta-base-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "roberta", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
license: cc-by-nc-sa-4.0
language:
- en
library_name: diffusers
tags:
- stable-diffusion
- text-to-image
widget:
- text: >-
    (yanyuan), 1girl, masterpiece, best quality, beautiful detailed sky, snowy street, [smile], dynamic angle, full body, flat chest, volume light, [red eyes]
  example_title: Yan Yuan
---

# YanYuan-v1-dreambooth

## Download

- [ckpt](./yanyuan_v1_dreambooth_clip2_5k_fp16.ckpt)
- [safetensors](./yanyuan_v1_dreambooth_clip2_5k_fp16.safetensors)

## Previews

![preview_1](./preview_1.png)

```text
(yanyuan), 1girl, masterpiece, best quality, beautiful detailed sky, snowy street, [smile], dynamic angle, full body, flat chest, volume light, [red eyes]
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10.5, Seed: 1497252609, Size: 512x384, Model hash: 5c27fc14ed, Denoising strength: 0.6, Clip skip: 2, ENSD: 31339, Hires upscale: 2, Hires steps: 20, Hires upscaler: Latent
```

![preview_2](./preview_2.png)

```text
(yanyuan), 1girl, masterpiece, best quality, beautiful detailed sky, beach, sunset, [smile], dynamic angle, full body, flat chest, volume light, [red eyes]
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10.5, Seed: 1962072237, Size: 512x384, Model hash: 5c27fc14ed, Denoising strength: 0.6, Clip skip: 2, ENSD: 31339, Hires upscale: 2, Hires steps: 20, Hires upscaler: Latent
```

## Recommended starting prompt

```text
(yanyuan), 1girl, masterpiece, best quality
```

## More information

See the main repository: [of_diffusion](https://huggingface.co/wybxc/of_diffusion).
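## Loading with diffusers (sketch)

The checkpoints above target Stable Diffusion web UIs, but recent diffusers releases can also load single-file checkpoints directly. This is an untested sketch — `from_single_file` is only available in newer diffusers versions, and the sampler/upscaler settings from the preview prompts do not carry over.

```python
from diffusers import StableDiffusionPipeline

# Sketch only: requires a recent diffusers release with single-file loading.
pipe = StableDiffusionPipeline.from_single_file(
    "./yanyuan_v1_dreambooth_clip2_5k_fp16.safetensors"
)
image = pipe("(yanyuan), 1girl, masterpiece, best quality").images[0]
image.save("yanyuan.png")
```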
ArJakusz/DialoGPT-small-starky
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-31T00:40:28Z
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---

# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)

This model is a diffusion model for unconditional image generation.

## Usage

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('tadisettiraju/raju_diffusion')
image = pipeline().images[0]
image
```
Aracatto/Catto
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer datasets: - dataset metrics: - accuracy - f1 - precision - recall model-index: - name: dccuchile-distilbert-base-spanish-uncased-finetuned-with-spanish-tweets-clf results: - task: name: Text Classification type: text-classification dataset: name: dataset type: dataset config: 60-20-20 split: dev args: 60-20-20 metrics: - name: Accuracy type: accuracy value: 0.6620594333102972 - name: F1 type: f1 value: 0.6612467747613665 - name: Precision type: precision value: 0.6698857111668722 - name: Recall type: recall value: 0.6593066578581652 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dccuchile-distilbert-base-spanish-uncased-finetuned-with-spanish-tweets-clf This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on the dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.5271 - Accuracy: 0.6621 - F1: 0.6612 - Precision: 0.6699 - Recall: 0.6593 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.8515 | 1.0 | 543 | 0.7681 | 0.6565 | 0.6374 | 0.6491 | 0.6363 | | 0.5505 | 2.0 | 1086 | 0.8494 | 0.6489 | 0.6431 | 0.6713 | 0.6357 | | 0.3302 | 3.0 | 1629 | 1.2386 | 0.6662 | 0.6640 | 0.6667 | 0.6653 | | 0.1675 | 4.0 | 2172 | 1.5271 | 0.6621 | 0.6612 | 0.6699 | 0.6593 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1 - Datasets 2.8.0 - Tokenizers 0.13.2
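## Inference (sketch)

The card does not show inference; this is a minimal sketch using the transformers `pipeline` API. The repo id below is a placeholder — replace it with the path this model is actually published under.

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual model path.
clf = pipeline(
    "text-classification",
    model="your-username/dccuchile-distilbert-base-spanish-uncased-finetuned-with-spanish-tweets-clf",
)
print(clf("¡Qué buen día para salir a caminar!"))
```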
Araf/Ummah
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - dataset metrics: - accuracy - f1 - precision - recall model-index: - name: distilbert-base-multilingual-cased-finetuned-with-spanish-tweets-clf results: - task: name: Text Classification type: text-classification dataset: name: dataset type: dataset config: 60-20-20 split: dev args: 60-20-20 metrics: - name: Accuracy type: accuracy value: 0.6005528680027643 - name: F1 type: f1 value: 0.5980973383983778 - name: Precision type: precision value: 0.6008849518067042 - name: Recall type: recall value: 0.5962561389203832 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-with-spanish-tweets-clf This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.4692 - Accuracy: 0.6006 - F1: 0.5981 - Precision: 0.6009 - Recall: 0.5963 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.0168 | 1.0 | 543 | 0.9144 | 0.5563 | 0.5012 | 0.5240 | 0.5251 | | 0.8197 | 2.0 | 1086 | 0.9133 | 0.5764 | 0.5476 | 0.5815 | 0.5462 | | 0.5574 | 3.0 | 1629 | 1.0629 | 0.6151 | 0.6150 | 0.6227 | 0.6112 | | 0.3487 | 4.0 | 2172 | 1.4692 | 0.6006 | 0.5981 | 0.6009 | 0.5963 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1 - Datasets 2.8.0 - Tokenizers 0.13.2
AragornII/DialoGPT-small-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-31T00:55:49Z
--- license: creativeml-openrail-m tags: - coreml - stable-diffusion - text-to-image --- # Core ML Converted Model: - This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br> - Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br> - `split_einsum` version is compatible with all compute unit options including Neural Engine.<br> - `original` version is only compatible with CPU & GPU option.<br> # Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). # Openjourney + Robo-Diffusion Merge: Source(s): [CivitAI](https://civitai.com/models/1214/openjourney-robo-diffusion-merge) It's just a merge of Robo-Diffusion, Openjourney (Midjourney-V4), and SD v1.5.
Aran/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
tags:
- ner
- punctuation
language:
- zh
---

# zh-wiki-punctuation-restore

More detail: https://github.com/p208p2002/ZH-Punctuation-Restore

Six punctuation marks are supported in total: , 、 。 ? ! ;

## Install

```bash
# pip install torch pytorch-lightning
pip install zhpr
```

## Usage

```python
from zhpr.predict import DocumentDataset, merge_stride, decode_pred
from transformers import AutoModelForTokenClassification, AutoTokenizer
from torch.utils.data import DataLoader


def predict_step(batch, model, tokenizer):
    batch_out = []
    batch_input_ids = batch

    encodings = {'input_ids': batch_input_ids}
    output = model(**encodings)

    predicted_token_class_id_batch = output['logits'].argmax(-1)
    for predicted_token_class_ids, input_ids in zip(predicted_token_class_id_batch, batch_input_ids):
        out = []
        tokens = tokenizer.convert_ids_to_tokens(input_ids)

        # Find where padding starts in input_ids and truncate both the
        # tokens and the predictions at that point.
        input_ids = input_ids.tolist()
        try:
            input_id_pad_start = input_ids.index(tokenizer.pad_token_id)
        except ValueError:  # no padding in this sequence
            input_id_pad_start = len(input_ids)
        input_ids = input_ids[:input_id_pad_start]
        tokens = tokens[:input_id_pad_start]

        # Map predicted class ids to their label names.
        predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids]
        predicted_tokens_classes = predicted_tokens_classes[:input_id_pad_start]

        for token, ner in zip(tokens, predicted_tokens_classes):
            out.append((token, ner))
        batch_out.append(out)
    return batch_out


if __name__ == "__main__":
    window_size = 256
    step = 200
    text = "維基百科是維基媒體基金會運營的一個多語言的百科全書目前是全球網路上最大且最受大眾歡迎的參考工具書名列全球二十大最受歡迎的網站特點是自由內容自由編輯與自由著作權"
    dataset = DocumentDataset(text, window_size=window_size, step=step)
    dataloader = DataLoader(dataset=dataset, shuffle=False, batch_size=5)

    model_name = 'p208p2002/zh-wiki-punctuation-restore'
    model = AutoModelForTokenClassification.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    model_pred_out = []
    for batch in dataloader:
        batch_out = predict_step(batch, model, tokenizer)
        for out in batch_out:
            model_pred_out.append(out)

    merge_pred_result = merge_stride(model_pred_out, step)
    merge_pred_result_decode = decode_pred(merge_pred_result)
    merge_pred_result_decode = ''.join(merge_pred_result_decode)
    print(merge_pred_result_decode)
```

```
維基百科是維基媒體基金會運營的一個多語言的百科全書,目前是全球網路上最大且最受大眾歡迎的參考工具書,名列全球二十大最受歡迎的網站,特點是自由內容、自由編輯與自由著作權。
```
ArashEsk95/bert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-labor_space_v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-labor_space_v3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Tokenizers 0.13.2
Aravinth/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-31T01:26:38Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - https://huggingface.co/erkam/sd-clevr-lora These are LoRA adaption weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the erkam/clevr-with-depth dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
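## Usage (sketch)

A hedged usage sketch for these LoRA weights with diffusers: `load_attn_procs` is the standard diffusers path for applying LoRA attention weights to a UNet. Adjust dtype and device to your setup; the prompt is only an illustration.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA adaption weights from this repository to the UNet.
pipe.unet.load_attn_procs("erkam/sd-clevr-lora")

image = pipe("a red cube behind a blue sphere", num_inference_steps=30).images[0]
image.save("clevr.png")
```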
Arcanos/1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-31T01:32:39Z
--- license: mit --- ### mofmof-style on Stable Diffusion This is the `<mofmof>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<mofmof> 0](https://huggingface.co/sd-concepts-library/mofmof-style/resolve/main/concept_images/SSIP-XG023.jpg のコピー) ![<mofmof> 1](https://huggingface.co/sd-concepts-library/mofmof-style/resolve/main/concept_images/SSIP-XG020.jpg のコピー) ![<mofmof> 2](https://huggingface.co/sd-concepts-library/mofmof-style/resolve/main/concept_images/SSIP-XG022.jpg のコピー)
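Beyond the notebooks linked above, recent diffusers releases can load a textual-inversion concept directly. This is a sketch, assuming the repository contains a standard `learned_embeds.bin`; the base model should match the one the concept was trained on (typically a Stable Diffusion v1.x checkpoint).

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Load the learned <mofmof> token embedding into the tokenizer/text encoder.
pipe.load_textual_inversion("sd-concepts-library/mofmof-style")

image = pipe("a city street in the style of <mofmof>").images[0]
image.save("mofmof.png")
```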
Arcktosh/DialoGPT-small-rick
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- library_name: keras --- The model weights were generated using this tutorial: [Teach StableDiffusion new concepts via Textual Inversion](https://keras.io/examples/generative/fine_tune_via_textual_inversion/).
Arghyad/Loki_small
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer model-index: - name: Vigec-V6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Vigec-V6 This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.1176 - eval_bleu: 90.2995 - eval_gen_len: 9.904 - eval_runtime: 72.4913 - eval_samples_per_second: 27.59 - eval_steps_per_second: 3.449 - epoch: 0.97 - step: 40000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 100000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
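## Inference (sketch)

A hedged inference sketch for this ViT5-based seq2seq model. The repo id is a placeholder, and the expected input format is not documented in the card — treat this only as the generic transformers seq2seq pattern.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id; replace with the actual model path.
model_id = "your-username/Vigec-V6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("toi di hoc o truong", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```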
AriakimTaiyo/kumiko
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---

# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial for training your first agent using ML-Agents and publishing it to the Hub (see the documentation link above).

### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **play directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: sohm/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
Arina/Erine
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 525.50 +/- 185.07 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dhmeltzer -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dhmeltzer -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dhmeltzer ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
ArjunKadya/HuggingFace
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### UtilityPole Dreambooth model trained by BotsOne with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
ArnaudPannatier/MLPMixer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-01-31T02:52:56Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - name: Accuracy type: accuracy value: 0.9183870967741935 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7721 - Accuracy: 0.9184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2896 | 1.0 | 318 | 3.2890 | 0.7432 | | 2.6284 | 2.0 | 636 | 1.8756 | 0.8377 | | 1.5483 | 3.0 | 954 | 1.1572 | 0.8961 | | 1.015 | 4.0 | 1272 | 0.8573 | 0.9132 | | 0.7953 | 5.0 | 1590 | 0.7721 | 0.9184 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
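## Inference (sketch)

Inference is not shown above; this is a minimal sketch that maps raw logits back to a CLINC intent label. The repo id is a placeholder — replace it with the path this model is actually published under.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id; replace with the actual model path.
model_id = "your-username/distilbert-base-uncased-finetuned-clinc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("transfer $100 from checking to savings", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```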
Arnold/wav2vec2-hausa2-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 datasets: - bigcode/the-stack language: - code programming_language: - TypeScript pipeline_tag: text-generation ---
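The metadata above describes a text-generation model trained on the TypeScript slice of bigcode/the-stack; a sketch of prompting such a checkpoint follows, with the model id left as a placeholder because the record does not name one.

```python
# Sketch: TypeScript code completion with a text-generation checkpoint trained
# on bigcode/the-stack. The model id below is a placeholder assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="<typescript-checkpoint>")  # placeholder

prompt = "function binarySearch(xs: number[], target: number): number {"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```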
Aron/distilbert-base-uncased-finetuned-emotion
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- language: - en license: mit --- # E5-small-unsupervised **This model is similar to [e5-small](https://huggingface.co/intfloat/e5-small) but without supervised fine-tuning.** [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 12 layers and the embedding size is 384. ## Usage Below is an example of encoding queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small-unsupervised') model = AutoModel.from_pretrained('intfloat/e5-small-unsupervised') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Citation If you find our paper or models helpful, please consider citing it as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
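Building on the snippet in the card above, a small sketch of using the same prefix and mean-pooling conventions to rank candidate passages for a single query; the query and passages are illustrative.

```python
# Sketch: rank passages for one query with e5-small-unsupervised, reusing the
# "query: "/"passage: " prefixes and mean pooling shown in the card above.
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-small-unsupervised")
model = AutoModel.from_pretrained("intfloat/e5-small-unsupervised")

query = "query: what is the capital of France"
passages = [
    "passage: Paris is the capital and most populous city of France.",
    "passage: Berlin is the capital and largest city of Germany.",
]

batch = tokenizer([query] + passages, max_length=512, padding=True,
                  truncation=True, return_tensors="pt")
hidden = model(**batch).last_hidden_state
# Mean-pool over non-padding tokens (same as the card's average_pool helper).
mask = batch["attention_mask"].unsqueeze(-1).bool()
emb = hidden.masked_fill(~mask, 0.0).sum(dim=1) / batch["attention_mask"].sum(dim=1, keepdim=True)
emb = F.normalize(emb, p=2, dim=1)

sims = (emb[0] @ emb[1:].T).tolist()  # cosine similarities, query vs. each passage
for score, passage in sorted(zip(sims, passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```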
ArseniyBolotin/bert-multi-PAD-ner
[ "pytorch", "jax", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
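For readers who want to see what the agent above learns, a minimal REINFORCE sketch for CartPole-v1 follows; the network size, learning rate, and episode count are illustrative assumptions, not the settings used to train this checkpoint.

```python
# Minimal REINFORCE sketch for CartPole-v1: policy gradient with Monte Carlo
# returns. All hyperparameters here are illustrative assumptions.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)

    # Discounted returns, computed backwards over the episode, then normalized.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # REINFORCE loss: maximize expected return via log-prob weighted returns.
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```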
Ayham/bert_roberta_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- datasets: - squad_it metrics: - squad language: - it license: apache-2.0 tags: - italian - squad_it - question-answering widget: - text: Qual è il soprannome di Vasco Rossi? context: >- Vasco Rossi, noto anche semplicemente come Vasco e in passato con l'appellativo Blasco (Zocca, 7 febbraio 1952), è un cantautore italiano - text: >- La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale? context: >- In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. - text: >- Il Regno Unito e la Francia non hanno avuto interruzioni dell' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto? context: >- L' embargo non era uniforme in tutta Europa. Dei nove membri della Comunità Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poichè si sono rifiutati di consentire all' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribaltò questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. - context: >- Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma più conosciuta (e attuale) con le lettere minuscole 'abc' racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet più simile. La semplicità del logo ha reso più facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell' avvento della computer grafica). text: Di quale carattere tipografico ricordano le lettere dell' iconico logo ABC? - context: >- La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi può accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Può sprecare fino alla metà del carbonio fissato dal ciclo di Calvin. Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all' ossigeno all' interno del cloroplasto, aumentando l' efficienza della fotosintesi. Questi meccanismi sono chiamati meccanismi di concentrazione dell' anidride carbonica, o CCM. Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. text: Che cosa può fare rubisco per errore? model-index: - name: electra-italian-xxl-cased-squad-it results: - task: type: question-answering name: Question Answering dataset: type: squad_it name: SQuAD-IT metrics: - type: exact-match value: 0.66 name: Test Exact Match - type: f1 value: 0.775 name: Test F1 train-eval-index: - config: default task: question-answering task_id: extractive_question_answering splits: eval_split: test col_mapping: context: context question: question answers.text: answers.text answers.answer_start: answers.answer_start pipeline_tag: question-answering library_name: transformers --- # electra-italian-xxl-cased-squad-it Electra model for (Extractive) Question Answering on Italian texts ## Model description This model has been fine-tuned on [squad_it dataset](https://huggingface.co/datasets/squad_it), starting from the pre-trained model [dbmdz/electra-base-italian-xxl-cased-discriminator](https://huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator). It can be used for [Extractive Q&A](https://huggingface.co/tasks/question-answering) on Italian texts. ## Evaluation | Metric | Value | | ------ | --------- | | **EM** | **0.660** | | **F1** | **0.775** | [Evaluation notebook](https://github.com/anakin87/electra-italian-xxl-cased-squad-it/blob/main/evaluation.ipynb) ## Usage in Transformers 🤗 Model checkpoints are available for usage in PyTorch. They can be used directly with pipelines as: ```python from transformers import pipeline qa = pipeline('question-answering', model='anakin87/electra-italian-xxl-cased-squad-it') qa(question="Qual è il soprannome di Vasco Rossi?", context="Vasco Rossi, noto anche semplicemente come Vasco e in passato con l'appellativo Blasco (Zocca, 7 febbraio 1952), è un cantautore italiano") >>> {'score': 0.93, 'start': 80, 'end': 86, 'answer': 'Blasco'} ``` ## Usage in Haystack 🚀🚀🚀 With the [Haystack NLP framework](https://github.com/deepset-ai/haystack), you can use this model and create a scalable Question Answering system that works across millions of documents. For a complete walkthrough, see [this notebook](https://github.com/anakin87/electra-italian-xxl-cased-squad-it/blob/main/usage_in_haystack.ipynb). ```python ... print_answers(prediction, details="medium") >>> Query: Con chi ha parlato di vaccini il premier Mario Draghi? Answers: [ { 'answer': 'Von der Leyen', 'context': " vaccino dell'azienda britannica. Durante la telefonata " 'tra Draghi e Von der Leyen, la presidente della ' 'Commissione Ue ha annunciato al presidente del', 'score': 0.9663902521133423}, { 'answer': 'Ursula Von der Leyen', 'context': 'colloquio telefonico con la presidente della Commissione ' 'europea Ursula Von der Leyen. Secondo fonti di Palazzo ' 'Chigi, dalla conversazione è emerso ch', 'score': 0.9063920974731445}, { 'answer': 'Mario Draghi, ha tenuto un lungo discorso alla 76esima ' 'Assemblea Generale delle Nazioni Unite', 'context': 'Il presidente del Consiglio, Mario Draghi, ha tenuto un ' 'lungo discorso alla 76esima Assemblea Generale delle ' 'Nazioni Unite, nella notte italiana. Tant', 'score': 0.5243796706199646}] ``` ## Comparison ⚖️ | Model | EM | F1 | Model size (PyTorch) | Architecture | |-----------------------------------------------------------|-------|-------|----------------------|------------------| | it5/it5-large-question-answering | 69.10 | 78.00 | 3.13 GB | encoder-decoder | | ***anakin87/electra-italian-xxl-cased-squad-it (this one)*** | *66.03* | *77.47* | *437 MB* | *encoder* | | it5/it5-base-question-answering | 66.30 | 76.10 | 990 MB | encoder-decoder | | it5/mt5-base-question-answering | 66.30 | 75.70 | 2.33 GB | encoder-decoder | | antoniocappiello/bert-base-italian-uncased-squad-it | 63.80 | 75.30 | 440 MB | encoder | | luigisaetta/squad_it_xxl_cased_hub1 | 63.95 | 75.27 | 440 MB | encoder | | it5/it5-efficient-small-el32-question-answering | 64.50 | 74.70 | 569 MB | encoder-decoder | | mrm8488/bert-italian-finedtuned-squadv1-it-alfa | 62.51 | 74.16 | 440 MB | encoder | | mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it | 60.50 | 72.41 | 443 MB | encoder | | it5/it5-small-question-answering | 61.90 | 71.60 | 308 MB | encoder-decoder | | it5/mt5-small-question-answering | 56.00 | 66.00 | 1.2 GB | encoder-decoder | | DrQA-it trained on SQuAD-it | 56.10 | 65.90 | ? | ? | ## Training details 🏋️‍ [Training notebook](https://github.com/anakin87/electra-italian-xxl-cased-squad-it/blob/main/training.ipynb) **Hyperparameters** - learning_rate: 2e-05 - batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP > Created by [Stefano Fiorucci/anakin87](https://github.com/anakin87) > > Made with <span style="color: #e25555;">&hearts;</span> in Italy
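To sanity-check the EM/F1 figures reported in the card above, a sketch of scoring the model on squad_it with the evaluate library; the unbatched loop is illustrative, and the exact procedure behind the published numbers is in the linked evaluation notebook.

```python
# Sketch: score the model on squad_it with the SQuAD metric (EM / F1).
# The loop is illustrative and unbatched; see the card's linked evaluation
# notebook for the exact procedure behind the reported numbers.
import evaluate
from datasets import load_dataset
from transformers import pipeline

qa = pipeline("question-answering", model="anakin87/electra-italian-xxl-cased-squad-it")
squad_metric = evaluate.load("squad")
test = load_dataset("squad_it", split="test")

predictions, references = [], []
for ex in test:
    pred = qa(question=ex["question"], context=ex["context"])
    predictions.append({"id": ex["id"], "prediction_text": pred["answer"]})
    references.append({"id": ex["id"], "answers": ex["answers"]})

print(squad_metric.compute(predictions=predictions, references=references))
```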